How to Speed up Local Development of a Docker Application running on AWS

March 7, 2023

Mac Watrous

While most engineering tooling at DoorDash is focused on making safe incremental improvements to existing systems, in part by testing in production (learn more about our end-to-end testing strategy), this is not always the best approach when launching an entirely new business line. Building from scratch often requires faster prototyping and customer validation than incremental improvements to an existing system. In the New Verticals organization at DoorDash, we are launching and growing new categories such as alcohol and other regulated goods, health, retail, convenience, and grocery. Often we're going from zero to one. We needed to move quite fast during one recent expansion of our business, which required a local development experience that could keep up. In this article we will provide some context and then explain how we were able to speed up our development by enabling easy local development with PostgreSQL.

Deviating from the typical DoorDash dev environment

Ideally, the infrastructure and requirements already are in place when we develop a backend microservice, which typically is the case for new applications at DoorDash. Concrete requirements and existing infrastructure streamline the path for development environments to integrate easily and safely, eliminating some of the need for rapid iteration because the application design can be front-loaded based on the requirements. Such existing stability helps avoid unexpected behavior within the application, ensuring that deployments are a safe operation.

However, this entirely new microservice could not be built on any existing infrastructure for compliance reasons. Instead, we had to develop our application in parallel with infrastructure planning and spin-up. As backend developers, we needed to stay unblocked while the infrastructure — in this case AWS resources — was being created. Our backend had to be developer-friendly and allow the team to iterate rapidly to deal with evolving requirements and work independently on separate tasks without the testing interrupting anyone’s work. With the amorphous nature of the task, the typical DoorDash local development environment approach was not suitable.

Creating a new local development approach

To kick off the creation of a local dev environment, we first had to take stock of our desired infrastructure as well as our available tooling and resources before charting out how to set up the environment quickly and efficiently. We knew we'd be deploying a Docker container to Fargate as well as using an Amazon Aurora PostgreSQL database and Terraform to model our infrastructure as code. It was fair to assume that we would use other AWS services, particularly SQS and AWS Secrets Manager.

One local development approach would have been to mock, or create dummy versions of our cloud resources. Local mocks may work well under some circumstances, but it’s difficult to be fully confident in the final end-to-end experience of an application because the mocks may be incorrect, lack important features, or ultimately have unanticipated behaviors.

Given these considerations, we developed a strategy for architecting our local development environment that would balance the tradeoffs between development speed, ease of use, and production fidelity. We broke the strategy into four steps:

  1. Use Docker Compose for our Docker application and all of its required resources.
  2. Set up a locally running containerized PostgreSQL database.
  3. Use LocalStack to enable locally running AWS resources.
  4. Utilize Terraform to create consistent AWS resources in LocalStack.

Understanding the tradeoffs 

Our local development approach involved a number of benefits and drawbacks, including: 

Pros:

  • Quick and easy for anyone new to the project to get the local development environment running for themselves.
  • Consistent local development environments between machines and between environment startups.
  • No chance of accidentally touching any production data or systems.
  • Easy to iterate on the desired infrastructure and add new application capabilities. 
  • No infrastructure required in the cloud.
  • No long waits during startup. Initial runs require some extra time to download Docker images, but each subsequent startup should be speedy.
  • All mastered in code, with the application and its complete environment mapped out via Docker Compose and Terraform.
  • Backend microservice framework and language agnostic.

Cons:

  • Because it’s not actually running in production, the final result may not be an entirely accurate reflection of how the microservice ultimately will perform in production.
  • Because there is no interaction with any production resources or data, it can be tough to create the dummy data needed to accurately reflect all testing scenarios.
  • Adding additional home-grown microservices that have their own dependencies may not be straightforward and may become unwieldy. 
  • As the application and infrastructure grows, running everything locally may become a resource drain on an engineer's machine.
  • Some LocalStack AWS services do not have 1:1 feature parity with AWS. Additionally, some require a paid subscription.

The bottom line is that this local development approach lets new developers get started faster, keeps the environment consistent, avoids production mishaps, is easy to iterate on, and can be tracked via Git. On the other hand, generating dummy data can be difficult and as the application’s microservice dependency graph grows, individual local machines may be hard-pressed to run everything locally.

Below we detail each element of our approach. If you're following along with your own application, substitute your specific implementation details for ours, including such things as Node.js, TypeScript, AWS services, and environment variable names.

Clone the example project

Let’s get started by checking out our example project from GitHub. This project has been set up according to the instructions detailed in the rest of this post.

Example project: doordash-oss/local-dev-env-blog-example

In this example, our backend application has been built using TypeScript, Node.js, Express, and TypeORM. You’re not required to use any of these technologies for your own application, of course, and we won’t focus on any specifics related to them.

This example project is based on an application that exposes two REST endpoints — one for creating a note and another for retrieving one.

POST /notes

GET /notes/:noteid

When posting a note, we also send a message to an SQS queue. Currently, nothing is done with these messages in the queue, but in the future we could wire up a consumer for the queue to later asynchronously process the notes.
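The shape of that queue message can be sketched with a small pure helper that builds the SQS SendMessage parameters from a freshly created note. The `Note` interface and the helper name below are our own illustration, not the example project's exact code:

```typescript
// Hypothetical note shape; the example project's actual entity may differ.
interface Note {
  id: string;
  contents: string;
}

// Build the parameters for an SQS SendMessage call from a note. Keeping the
// parameter construction pure makes it easy to unit test without touching a
// real (or LocalStack) queue.
function buildNoteMessage(note: Note, queueUrl: string) {
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify({ id: note.id, contents: note.contents }),
  };
}
```

In the POST handler, the result would be passed to an SQS client's `sendMessage` call, with the queue URL read from an environment variable such as `SQS_NOTES_QUEUE_URL`.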

Install the prerequisite packages to get the example project to start. Note that these instructions are also located in the project README.

- Node version >= 16.13 but <17 installed.
 - https://nodejs.org/en/download/
- Docker Desktop installed and running.
 - https://www.docker.com/products/docker-desktop/
- postgresql installed.
 - `brew install postgresql`
- awslocal installed.
 - https://docs.localstack.cloud/integrations/aws-cli/#localstack-aws-cli-awslocal
- Run npm install
 - `npm install`

Set up Docker Compose with your application

Docker Compose is a tool for defining and running multi-container Docker environments. In this case, we will run our application as one container and use a few others to simulate our production environment as accurately as possible.

1. Start by setting up the application to run via Docker Compose. First, create a Dockerfile, which describes how the container should be built.

dockerfiles/Dockerfile-api-dev

FROM public.ecr.aws/docker/library/node:lts-slim


# Create app directory
WORKDIR /home/node/app


# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY tsconfig.json ./
COPY src ./src
RUN npm install --frozen-lockfile


# We have to install these dev dependencies as regular dependencies to get hot swapping to work
RUN npm install nodemon ts-node @types/pg


# Bundle app source
COPY . .

This Dockerfile contains steps specific to Node.js. Unless you are also using Node.js and TypeORM, yours will look different. For more information, see the Dockerfile reference in the Docker documentation.

2. Next, create a docker-compose.yml file and define the application container.

docker-compose.yml

version: '3.8'


services:
 api:
   container_name: example-api
   build:
     context: .
     dockerfile: ./dockerfiles/Dockerfile-api-dev
   ports:
     - '8080:8080'
   volumes:
     - ./src:/home/node/app/src
   environment:
     - NODE_ENV=development
   command: ['npm', 'run', 'dev']

Here we have defined a service called api that spins up a container named example-api, built from the Dockerfile we previously defined. It exposes port 8080, which is the port our Express server starts on, and mounts the ./src directory to the container directory /home/node/app/src. We're also setting the NODE_ENV environment variable to development and starting the application with the command npm run dev. You can see what npm run dev does specifically by checking out that script in package.json. In this case, we're using a package called nodemon, which auto-restarts our backend Node.js Express application whenever we make a change to any TypeScript file in our src directory, a process called hot swapping. This isn't necessary for your application, but it definitely speeds up the development process.

Set up a locally running database

Most backend microservices wouldn’t be complete without a database layer for persisting data. This next section will walk you through adding a PostgreSQL database locally. While we use PostgreSQL here, many other databases have Docker images available, such as CockroachDB or MySQL.

1. First, we’ll set up a PostgreSQL database to be run and connected to locally via Docker Compose.

Add a new PostgreSQL service to the docker-compose.yml file.

docker-compose.yml

postgres:
   container_name: 'postgres'
   image: public.ecr.aws/docker/library/postgres:14.3-alpine
   environment:
     - POSTGRES_USER=test
     - POSTGRES_PASSWORD=password
     - POSTGRES_DB=example
   ports:
     - '5432:5432'
   volumes:
     - ./db:/var/lib/postgresql/data
   healthcheck:
     test: ['CMD-SHELL', 'pg_isready -U test -d example']
     interval: 5s
     timeout: 5s
     retries: 5

Here we have defined a service and container called postgres. It uses the public PostgreSQL 14.3 image because we don’t need any customization. We’ve specified a few environment variables, namely the user and password needed to connect to the database and the name of the database. We’re exposing the default PostgreSQL port 5432 locally and using a local folder named db for the underlying database data. We’ve also defined a health check that checks that the example database is up and accessible.

Now we can connect our application to it by adding relevant environment variables that match our configured database credentials.

docker-compose.yml

api:
   container_name: example-api
   build:
     context: .
     dockerfile: ./dockerfiles/Dockerfile-api-dev
   ports:
     - '8080:8080'
   depends_on:
     postgres:
       condition: service_healthy
   volumes:
     - ./src:/home/node/app/src
   environment:
     - NODE_ENV=development
     - POSTGRES_USER=test
     - POSTGRES_PASSWORD=password
     - POSTGRES_DATABASE_NAME=example
     - POSTGRES_PORT=5432
     - POSTGRES_HOST=postgres
   command: ['npm', 'run', 'dev']

One interesting thing to note about connections between containers in a Docker Compose environment is that the hostname you use to connect to another container is the container's name. In this case, because we want to connect to the postgres container, we set the host environment variable to be postgres. We’ve also specified a depends_on section which tells the example-api container to wait to start up until the health check for our postgres container returns successfully. This way our application won’t try to connect to the database before it is up and running.
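On the application side, these variables can be folded into a connection config. A minimal sketch follows; the helper name is ours, and the example project wires these values into TypeORM instead:

```typescript
// Build a PostgreSQL connection config from the environment variables defined
// in docker-compose.yml. Inside the Compose network the host is the service
// name ("postgres"); from the host machine it would be "localhost".
function pgConfigFromEnv(env: Record<string, string | undefined>) {
  return {
    host: env.POSTGRES_HOST ?? 'localhost',
    port: Number(env.POSTGRES_PORT ?? 5432),
    user: env.POSTGRES_USER,
    password: env.POSTGRES_PASSWORD,
    database: env.POSTGRES_DATABASE_NAME,
  };
}
```

Calling `pgConfigFromEnv(process.env)` inside the container resolves the host to `postgres`, matching the container name defined in docker-compose.yml.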

2. Now we’ll seed the database with some data whenever it starts up.

If you’re testing your application in any way, it’s probably useful to have a local database that always has some data. To ensure a consistent local development experience across docker-compose runs and across different developers, we can add a Docker container which runs arbitrary SQL when docker-compose starts.

To do this, we start by defining a bash script and an SQL file as shown below.

scripts/postgres-seed.sql

-- Add any commands you want to run on DB startup here.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";


CREATE TABLE IF NOT EXISTS notes (
 id         UUID NOT NULL DEFAULT uuid_generate_v4(),
 contents   varchar(450) NOT NULL,
 created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now(),
 updated_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now()
);


-- Since data is kept between container restarts, you probably want to delete old inserted data so that you have a known state every time the database starts up
DELETE FROM notes;


INSERT INTO notes (id, contents) VALUES ('6a71ff7e-577e-4991-bc70-4745b7fbbb78', 'Look at this lovely note!');

This is just a simple SQL file that creates a database table called “notes” and inserts a note into it. Note the use of IF NOT EXISTS and the DELETE, which ensure that this script will always execute successfully, whether it’s run after the database is first created or multiple times after.

scripts/local-postgres-init.sh

#!/bin/bash


export PGPASSWORD=password; psql -U test -h postgres -d example -f /scripts/postgres-seed.sql

This bash file executes our postgres-seed.sql script against our database.

Next, define the Docker service and container in docker-compose to run the script and the SQL.

docker-compose.yml

postgres-init:
   container_name: postgres-init
   image: public.ecr.aws/docker/library/postgres:14.3-alpine
   volumes:
     - './scripts:/scripts'
   entrypoint: '/bin/bash'
   command: ['/scripts/local-postgres-init.sh']
   depends_on:
     postgres:
       condition: service_healthy

This spins up a container with the name postgres-init that runs our bash script from above. Like our application, it waits to start until our database container itself is up and running.

Speaking of our application, let’s also make sure that it waits for our database to be seeded.

docker-compose.yml

api:
   container_name: example-api
   build:
     context: .
     dockerfile: ./dockerfiles/Dockerfile-api-dev
   ports:
     - '8080:8080'
   depends_on:
     postgres:
       condition: service_healthy
     postgres-init:
       condition: service_completed_successfully
   volumes:
     - ./src:/home/node/app/src
   environment:
     - NODE_ENV=development
     - POSTGRES_USER=test
     - POSTGRES_PASSWORD=password
     - POSTGRES_DATABASE_NAME=example
     - POSTGRES_PORT=5432
     - POSTGRES_HOST=postgres
   command: ['npm', 'run', 'dev']

Set up LocalStack

If you’re taking full advantage of AWS, your local development environment likely wouldn’t be complete without access to the AWS services you rely on — or at least mocks of them. LocalStack lets you run many of your AWS resources locally alongside your application, ensuring test data is always separated from the rest of your team while maintaining an application environment that’s as close to prod as possible.

1. First, set up LocalStack to run with Docker Compose.

Just like our database or application, we define a LocalStack service and container in our docker-compose.yml file. The configuration we’re using is based on the recommended configuration from LocalStack.

docker-compose.yml

localstack:
   container_name: 'localstack'
   image: localstack/localstack
   ports:
     - '4566:4566'
   environment:
     - DOCKER_HOST=unix:///var/run/docker.sock
   volumes:
     - '${TMPDIR:-/tmp}/localstack:/var/lib/localstack'
     - '/var/run/docker.sock:/var/run/docker.sock'

Here we’ve defined a service named localstack with a container named localstack. It uses the publicly available LocalStack image and exposes port 4566, which is the default port LocalStack runs on. Per their config suggestions, we set an environment variable that connects LocalStack to Docker and a couple of volumes, one of which is required for Docker connectivity while the other specifies where LocalStack should store its data.

2. Now that you have LocalStack running alongside your application, we can create some AWS resources with which your application can interact.

This can be done manually by using the LocalStack CLI: 

awslocal s3api create-bucket --bucket my-test-bucket        

awslocal s3api list-buckets
{
    "Buckets": [
        {
            "Name": "my-test-bucket",
            "CreationDate": "2022-12-02T21:53:24.000Z"
        }
    ],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "bcaf1ffd86f41161ca5fb16fd081034f"
    }
}

For more information on commands, see the AWS CLI v1 wiki and the LocalStack docs on AWS service feature coverage. Instead of using aws, you just use awslocal.

Let’s also make sure our application doesn’t try to start up without LocalStack already running.

docker-compose.yml

api:
   container_name: example-api
   build:
     context: .
     dockerfile: ./dockerfiles/Dockerfile-api-dev
   ports:
     - '8080:8080'
   depends_on:
     localstack:
       condition: service_started
     postgres:
       condition: service_healthy
     postgres-init:
       condition: service_completed_successfully
   volumes:
     - ./src:/home/node/app/src
   environment:
     - NODE_ENV=development
     - POSTGRES_USER=test
     - POSTGRES_PASSWORD=password
     - POSTGRES_DATABASE_NAME=example
     - POSTGRES_PORT=5432
     - POSTGRES_HOST=postgres
     - AWS_REGION=us-west-2
     - AWS_ACCESS_KEY_ID=fake
     - AWS_SECRET_ACCESS_KEY=fake
     - SQS_NOTES_QUEUE_URL=http://localstack:4566/000000000000/notes-queue
   command: ['npm', 'run', 'dev']

Set up Terraform

While it’s great to be able to create AWS resources on the fly for your application locally, you probably have some resources you want to start up every single time with your application. Terraform is a good tool to ensure a consistent and reproducible AWS infrastructure.

1. To start, define the infrastructure in Terraform.

We’re going to define our infrastructure in a stock standard .tf file. The only difference is that we need to specify that the AWS endpoint we want to interact with is actually LocalStack. 

Let’s add a queue.

terraform/localstack.tf

provider "aws" {
 region                      = "us-west-2"
 access_key                  = "test"
 secret_key                  = "test"


 skip_credentials_validation = true
 skip_requesting_account_id  = true
 skip_metadata_api_check     = true


 endpoints {
   sqs = "http://localstack:4566"
 }
}


resource "aws_sqs_queue" "queue" {
 name = "notes-queue"
}

Here we’ve set up a very basic Terraform configuration for AWS resources. All the values in the provider section should stay as-is except for the region, which is up to you. Just remember that your application will need to use the same region. You can see we set up an SQS Queue called “notes-queue” and we’ve made sure to set the SQS endpoint to localstack.

2. Continuing the theme of automation via Docker Compose, we can now use Docker to automatically apply our Terraform config on startup.

Let's create a new Docker-based service and container in our docker-compose.yml file, with a Dockerfile that installs Terraform and the AWS CLI and then runs Terraform to create our resources. Yes, you heard that correctly: this container is going to run Docker CLI commands itself (Docker-ception!). More on that in a second.

First, we need our Dockerfile. It looks complicated, but it just involves these straightforward steps.

  1. Install required prerequisites.
  2. Install AWS CLI.
  3. Install Terraform.
  4. Copy our local script, which runs Terraform, onto the container image.
  5. Have the image run our Terraform script when the container starts up.

dockerfiles/Dockerfile-localstack-terraform-provision

FROM docker:20.10.10


RUN apk update && \
   apk upgrade && \
   apk add --no-cache bash wget unzip


# Install AWS CLI
RUN echo -e 'http://dl-cdn.alpinelinux.org/alpine/edge/main\nhttp://dl-cdn.alpinelinux.org/alpine/edge/community\nhttp://dl-cdn.alpinelinux.org/alpine/edge/testing' > /etc/apk/repositories && \
   wget "s3.amazonaws.com/aws-cli/awscli-bundle.zip" -O "awscli-bundle.zip" && \
   unzip awscli-bundle.zip && \
   apk add --update groff less python3 curl && \
   ln -s /usr/bin/python3 /usr/bin/python && \
   rm /var/cache/apk/* && \
   ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws && \
   rm awscli-bundle.zip && \
   rm -rf awscli-bundle


# Install Terraform
RUN wget https://releases.hashicorp.com/terraform/1.1.3/terraform_1.1.3_linux_amd64.zip \
 && unzip terraform_1.1.3_linux_amd64.zip \
 && mv terraform /usr/local/bin/terraform \
 && chmod +x /usr/local/bin/terraform


RUN mkdir -p /terraform
WORKDIR /terraform


COPY scripts/localstack-terraform-provision.sh /localstack-terraform-provision.sh


CMD ["/bin/bash", "/localstack-terraform-provision.sh"]

Now we have to set up the corresponding Docker Compose service and container.

docker-compose.yml

localstack-terraform-provision:
   build:
     context: .
     dockerfile: ./dockerfiles/Dockerfile-localstack-terraform-provision
   volumes:
     - /var/run/docker.sock:/var/run/docker.sock
     - ./terraform:/terraform
     - ./scripts:/scripts

This points to that Dockerfile we just created and makes sure the container has access to the running instance of Docker, as well as the Terraform and scripts directories.

Next, we need to create the aforementioned shell script.

scripts/localstack-terraform-provision.sh

#!/bin/bash


(docker events --filter 'event=create'  --filter 'event=start' --filter 'type=container' --filter 'container=localstack' --format '{{.Actor.Attributes.name}} {{.Status}}' &) | while read event_info


do
   event_infos=($event_info)
   container_name=${event_infos[0]}
   event=${event_infos[1]}


   echo "$container_name: status = ${event}"


   if [[ $event == "start" ]]; then
       sleep 10 # give localstack some time to start
       terraform init
       terraform apply --auto-approve
       echo "The terraform configuration has been applied."
       pkill -f "docker event.*"
   fi
done

This script first runs a Docker CLI command that waits until it sees a Docker event, indicating that the LocalStack container has started up successfully. We do this so that we don’t try to run Terraform without having LocalStack accessible. You can imagine how it might be hard to create an SQS queue if SQS for all intents and purposes didn’t exist.

It may be a confusing move, but we’re also going to make sure our localstack container waits for our localstack-terraform-provision container to start up. This way we guarantee that the localstack-terraform-provision container is up and watching for LocalStack to be up before LocalStack itself tries to start. If we don’t do this, it’s possible that our localstack-terraform-provision container would miss the start event from our localstack container.

docker-compose.yml

localstack:
   container_name: 'localstack'
   image: localstack/localstack
   ports:
     - '4566:4566'
   environment:
     - DOCKER_HOST=unix:///var/run/docker.sock
   volumes:
     - '${TMPDIR:-/tmp}/localstack:/var/lib/localstack'
     - '/var/run/docker.sock:/var/run/docker.sock'
   depends_on:
      # We wait for the localstack-terraform-provision container to start
      # so that it can watch for this localstack container to be ready
      - localstack-terraform-provision

Finally, we make sure our application doesn’t start until we’ve finished executing our Terraform.

docker-compose.yml

api:
   container_name: example-api
   build:
     context: .
     dockerfile: ./dockerfiles/Dockerfile-api-dev
   ports:
     - '8080:8080'
   depends_on:
     localstack:
       condition: service_started
     localstack-terraform-provision:
       condition: service_completed_successfully
     postgres:
       condition: service_healthy
     postgres-init:
       condition: service_completed_successfully
   volumes:
     - ./src:/home/node/app/src
   environment:
     - NODE_ENV=development
     - POSTGRES_USER=test
     - POSTGRES_PASSWORD=password
     - POSTGRES_DATABASE_NAME=example
     - POSTGRES_PORT=5432
     - POSTGRES_HOST=postgres
     - AWS_REGION=us-west-2
     - AWS_ACCESS_KEY_ID=fake
     - AWS_SECRET_ACCESS_KEY=fake
     - SQS_NOTES_QUEUE_URL=http://localstack:4566/000000000000/notes-queue
   command: ['npm', 'run', 'dev']

Starting up your local development environment 

If you’ve followed along and have your application set up accordingly, or you’re just playing around with our example project, you should be ready to start everything up and watch the magic!

To start up Docker Compose, just run docker-compose up.

You should see that all required images are downloaded, the containers created and started, and everything running in the startup order we've defined via depends_on. Finally, you should see your application become available. In our case with the example project, this looks like:

example-api                 | Running on http://0.0.0.0:8080

There will be a folder called db created with some files inside of it; this is essentially your running database. You’ll also see some more files in your Terraform folder. These are the files Terraform uses to understand the state of your AWS resources.

We’ll have a database running that is seeded with some data. In our case, we added a table called notes and a note. You can verify this locally by using a tool like psql to connect to your database and query it like this:

export PGPASSWORD=password; psql -U test -h localhost -d example

select * from notes;

                  id                  |         contents          |         created_at         |         updated_at
--------------------------------------+---------------------------+----------------------------+----------------------------
 6a71ff7e-577e-4991-bc70-4745b7fbbb78 | Look at this lovely note! | 2022-12-02 17:08:36.243954 | 2022-12-02 17:08:36.243954

Note that we're using a host of localhost, not postgres, because we're connecting from outside the docker-compose environment.

Now try calling the application.

curl -H "Content-Type: application/json" \
  -d '{"contents":"This is my test note!"}' \
  "http://127.0.0.1:8080/notes"

If we check back in our database, we should see that the note has been created.

                  id                  |         contents          |         created_at         |         updated_at
--------------------------------------+---------------------------+----------------------------+----------------------------
 6a71ff7e-577e-4991-bc70-4745b7fbbb78 | Look at this lovely note! | 2022-12-05 16:59:03.108637 | 2022-12-05 16:59:03.108637
 a223103a-bb24-491b-b3c6-8690bc852ec9 | This is my test note!     | 2022-12-05 17:26:33.845654 | 2022-12-05 17:26:33.845654
We can also inspect the SQS queue to see that a corresponding message is waiting to be processed.

awslocal sqs receive-message --region us-west-2 --queue-url \
  http://localstack:4566/000000000000/notes-queue

{
    "Messages": [
        {
            "MessageId": "0917d626-a85b-4772-b6fe-49babddeca76",
            "ReceiptHandle": "NjA5OWUwOTktODMxNC00YjhjLWJkM",
            "MD5OfBody": "73757bf6dfcc3980d48acbbb7be3d780",
            "Body": "{\"id\":\"a223103a-bb24-491b-b3c6-8690bc852ec9\",\"contents\":\"This is my test note!\"}"
        }
    ]
}

Note that the default LocalStack AWS account ID is 000000000000.

Finally, we can also call our GET endpoint to fetch this note.

curl -H "Content-Type: application/json" "http://127.0.0.1:8080/notes/a223103a-bb24-491b-b3c6-8690bc852ec9"


{
  "id":"a223103a-bb24-491b-b3c6-8690bc852ec9",
  "contents":"This is my test note!",
  "createdAt":"2022-12-05T17:26:33.845Z",
  "updatedAt":"2022-12-05T17:26:33.845Z"
}

Conclusion

When developing cloud software as a part of a team, it is often not practical or convenient for each person to have a dedicated cloud environment for local development testing purposes. Teams would need to keep all of their personal cloud infrastructure in sync with the production cloud infrastructure, making it easy for things to become stale and/or drift. It is also not practical to share the same dedicated cloud environment for local development testing because changes being tested may conflict and cause unexpected behavior. At the same time, you want the local development environment to be as close to production as possible. Developing on production itself can be slow, is not always feasible given possible data sensitivity concerns, and can also be tricky to set up in a safe manner. These are all tough requirements to fuse together.

Ideally, if you’ve followed along with this guide, you’ll now have an application with a local development environment that solves these requirements — no matter the backend application language or microservice framework! While this is mostly tailored to Postgres, it’s possible to wire this up with any other database technology that can be run as a Docker container. We hope this guide helps you and your team members to iterate quickly and confidently on your product without stepping on each other's toes.
