The Stack That Helped Opendoor Buy and Sell Over $1B in Homes

Written by opendoor | Published 2017/03/17
Tech Story Tags: real-estate | technology | data-science | startup | stackshare


Originally posted on StackShare

About Opendoor

Unless you’re in San Francisco or New York, selling your home is a giant headache that typically lasts three months. Opendoor removes the headache — go online, ask for an offer, answer a few questions and we’ll buy your home directly from you. We’ll take it from there and deal with selling the home to another buyer while you can go on with your life.

Right now we operate in Phoenix, Dallas, and Las Vegas. We’ve completed over 4,800 real estate transactions — over $1B in homes. For a company about to turn 3 years old, it’s pretty crazy how far we’ve come.

There’s a lot that goes into one real estate transaction. First, there’s what you might consider our core engineering challenge: making an accurate offer on each home. If we offer too much, we’ll lose money and go out of business; if we offer too little, we’ll seem like scammers and offend our customers.

After we buy the home, we’ll work with contractors to do any necessary repairs and touch-ups, then put the home on the market and find a buyer. Since we own every home, we can do clever things like putting smart locks on all the doors and offering all-day open houses.

I’m a frontend engineer, and mainly like to work on the consumer-facing website. I’m currently working on improving the experience for first-time home buyers. The process can be really scary for people who don’t know anything about real estate.

Engineering Organization

Our team is split between product engineering and data science: the tools used by each team are different enough that the teams work in separate code bases. Of course, the resulting product has to be well-integrated, and the product team pulls a lot of data from data science APIs. This coordination is tricky to get right; Kevin Teh from the data science team wrote about it in some detail in a recent post.

At first, we split the product team into “customer-facing” and “internal tools” groups. It was nice to have all the frontend engineers on the same team, but we noticed that some projects didn’t have clear owners. For example, our buyer support team uses some internal tools we’ve built. Should those tools be developed by the “internal tools” team, or is support part of the customer experience?

Now the team is split into cross-functional teams based around parts of the business. The Seller team handles people selling to us; the Homes team handles renovations and inventory; and the Buyer team puts our homes on the market and finds buyers.

As we grow, the lines between teams often get blurry, so we expect that the structure will always be evolving. It’s common for engineers to move between teams, including between the product and data science teams.

Product Architecture

We started in 2014 with a Ruby on Rails monolith and Angular frontend, both of which were good ways to move fast while we were very small.

The MVP of our customer-facing product was a multi-page form where you could enter information about your home to get an offer, but that was just the tip of the iceberg. We had to build internal tools to help our team correctly price homes and manage the transaction process. We used Angular and Bootstrap to build out those tools; the main goal was to add features quickly, without fiddling around with CSS — in fact, without requiring any frontend experience at all.

We use Puma as our webserver, and Postgres for our database — one big benefit is the PostGIS extension for location data. Sidekiq runs our asynchronous jobs with support from Redis. Elasticsearch shows up everywhere in our internal tools. We use Webpack to build our frontend apps, and serve them using the Rails Asset Pipeline.
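
To make the PostGIS piece concrete, here's a rough sketch of the kind of nearby-homes query it enables (the table and column names are illustrative, not our actual schema, and it's written in Python rather than Ruby purely for brevity):

```python
# Illustrative sketch: finding homes within 1 km of a point using PostGIS.
# "homes" and "location" are made-up names, not Opendoor's actual schema.
import psycopg2

conn = psycopg2.connect("dbname=listings")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT id, address
        FROM homes
        WHERE ST_DWithin(
            location,                          -- geography(Point) column
            ST_MakePoint(%s, %s)::geography,   -- lon, lat of the search center
            1000                               -- radius in meters
        )
        """,
        (-112.0740, 33.4484),  # roughly downtown Phoenix
    )
    nearby = cur.fetchall()
```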

We use Imgix to store photos of our homes, as well as most of the icons and illustrations around our site. We mainly use Imgix’s auto-resizing feature, so we never lose track of our original images, but can later load images of appropriate size for each context on the frontend.
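
As a hypothetical example of how that works, a resized variant is just the original image URL plus Imgix render parameters; the domain and paths below are made up:

```python
# Illustrative only: building resized-image URLs with Imgix render parameters.
# w, h, fit, and auto are standard Imgix query parameters; the domain is fake.
from urllib.parse import urlencode

def imgix_url(path, width, height=None):
    params = {"w": width, "auto": "format"}
    if height:
        params.update({"h": height, "fit": "crop"})
    return f"https://example.imgix.net/{path}?{urlencode(params)}"

# The original upload stays untouched; each context requests the size it needs.
thumbnail = imgix_url("homes/123/front.jpg", width=320)
hero = imgix_url("homes/123/front.jpg", width=1600, height=900)
```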

Monolith to Microservices

Where appropriate, we try to break isolated logic out into microservices. For example, we’re working on a service which calculates our projected costs and fees. Our cost structure changes frequently, and we want to estimate how policy changes might affect our fees. This code wasn’t a great fit for the Rails app because we wanted it to be accessible to our analysts and data scientists as well.

We’ve now split this logic out into its own service. It uses a version-history-aware computation graph to calculate and back-test our internal costs, and (soon!) will come with its own React frontend to visualize those calculations.
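
As a toy illustration of the versioning idea (not our actual implementation), imagine each fee rate carrying an effective date, so the same calculation can be replayed under whatever policy was in force at a given time:

```python
# Toy sketch of a version-aware fee calculation; the rates are invented.
from datetime import date

FEE_VERSIONS = {
    # effective_date -> service charge rate (illustrative numbers only)
    date(2016, 1, 1): 0.060,
    date(2017, 1, 1): 0.065,
}

def rate_as_of(as_of):
    """Pick the most recent rate whose effective date is on or before `as_of`."""
    applicable = [d for d in FEE_VERSIONS if d <= as_of]
    return FEE_VERSIONS[max(applicable)]

def projected_fees(offer_price, as_of):
    return offer_price * rate_as_of(as_of)

# Back-testing a policy change: recompute past transactions under both versions.
print(projected_fees(250000, date(2016, 6, 1)))  # 15000.0
print(projected_fees(250000, date(2017, 6, 1)))  # 16250.0
```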

Our data science stack is also a fully separate set of services, so there’s a lot of inter-app communication going on. To let these services authenticate to one another, we use an Elixir app called Paladin. Opendoor engineer Dan Neighman wrote and open-sourced Paladin, and explains why it’s helpful in this blog post. Authentication is based on JWTs provided by Warden and Guardian.
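
For a rough feel of the token flow, here's a minimal JWT sketch in Python using PyJWT; the real setup goes through Paladin, Warden, and Guardian rather than a shared secret like this:

```python
# Minimal JWT sketch (PyJWT), just to show the shape of service-to-service
# tokens. Not the actual Paladin flow; the secret and service names are fake.
import time
import jwt

SHARED_SECRET = "not-a-real-secret"

def mint_token(issuer, audience):
    claims = {
        "iss": issuer,                  # the calling service
        "aud": audience,                # the service being called
        "iat": int(time.time()),
        "exp": int(time.time()) + 60,   # short-lived token
    }
    return jwt.encode(claims, SHARED_SECRET, algorithm="HS256")

def verify_token(token, expected_audience):
    # Raises jwt.InvalidTokenError if the signature, audience, or expiry is bad.
    return jwt.decode(token, SHARED_SECRET, algorithms=["HS256"],
                      audience=expected_audience)

token = mint_token("product-app", "cost-service")
claims = verify_token(token, "cost-service")
```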

Data Science Architecture

I’ve always found data science at Opendoor interesting because it’s not the “grab as much data as you possibly can, then process it at huge scale” problem I’m used to hearing about.

To find the price of a house, you look at nearby homes that sold recently, then squeeze as much information out of that data as you possibly can by comparing it to what you know about the market as a whole. Our co-founder Ian Wong has a more in-depth talk here.
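
As a deliberately naive illustration of that intuition (the real models are far more sophisticated), you can think of it as weighting recent nearby sales by how similar they are to the subject home:

```python
# Toy comp-based estimate: weight nearby recent sales by similarity.
# All numbers are made up for illustration.
comps = [
    # (sale_price, similarity weight in [0, 1])
    (245000, 0.9),
    (260000, 0.6),
    (230000, 0.4),
]

estimate = sum(price * w for price, w in comps) / sum(w for _, w in comps)
print(round(estimate))  # ~246,579
```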

We can group most of the data science work into several core areas:

  1. Ingesting and organizing data from a variety of sources
  2. Training machine learning models to predict home value and market risk
  3. Quantifying and mitigating various forms of risk, including macroeconomic risk and the liquidity risk of individual homes
  4. Collecting information in a data warehouse to empower the analytics team

For data ingestion, we pull from a variety of sources (like tax record and assessor data). We dump most of this data into an RDS Postgres database. We also transform and normalize everything at this phase — we’re importing dirty data from sources that often conflict. This blog post goes into more detail on how we merge data for a given address.
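
As a crude illustration of the normalization problem (the production pipeline is much more involved), records from different sources need to collapse to a common key before they can be merged:

```python
# Illustrative only: a crude normalization key so records for the same address
# from different sources can be matched. Real address matching is far messier.
import re

ABBREVIATIONS = {"street": "st", "avenue": "ave", "drive": "dr", "road": "rd"}

def address_key(raw):
    tokens = re.sub(r"[^a-z0-9 ]", "", raw.lower()).split()
    tokens = [ABBREVIATIONS.get(t, t) for t in tokens]
    return " ".join(tokens)

# Both spellings collapse to the same key, so the records can be merged.
assert address_key("123 N. Main Street") == address_key("123 n main st")
```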

For our machine learning model, we use Python with building blocks from SqlAlchemy, scikit-learn, and Pandas. We use Flask for routing/handling requests. We use Docker to build images and Kubernetes for deployment and scaling. Our system lets us describe a model as a JSON configuration, and once deployed, the system automatically grabs the required features, trains the model, and evaluates how well the model did against performance metrics. This automation lets us iterate really fast.
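
Here's a hypothetical sketch of what "a model as a JSON configuration" can look like; the feature names, config keys, and model choice are invented for illustration, and `df` is assumed to be a Pandas DataFrame of training data:

```python
# Hypothetical config-driven training loop: describe the model in JSON, then
# let generic code train it and report a metric. Names are illustrative.
import json
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

config = json.loads("""
{
  "features": ["sqft", "beds", "baths", "lot_size"],
  "target": "sale_price",
  "params": {"n_estimators": 200, "max_depth": 4}
}
""")

def train_from_config(df, config):
    X, y = df[config["features"]], df[config["target"]]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(**config["params"]).fit(X_train, y_train)
    return model, mean_absolute_error(y_test, model.predict(X_test))
```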

We’re starting to use Dask for feature fetching and processing. Other companies often use Spark and Hadoop for this, but we need support for more complex parallel algorithms. Dask’s own comparison-to-PySpark write-up describes this perfectly:

Dask is lighter weight and is easier to integrate into existing code and hardware. If your problems vary beyond typical ETL + SQL and you want to add flexible parallelism to existing solutions then dask may be a good fit, especially if you are already using Python and associated libraries like NumPy and Pandas.
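
A small sketch of the kind of flexible parallelism this buys us; the feature functions below are placeholders, not our actual pipeline:

```python
# Sketch of dask.delayed building a lazy task graph; bodies are placeholders.
import dask

@dask.delayed
def fetch_features(address):
    # placeholder: in reality this would pull tax, assessor, and listing data
    return {"address": address, "sqft": 1500}

@dask.delayed
def combine(rows):
    # placeholder: merge per-address feature rows into one table
    return list(rows)

addresses = ["addr-1", "addr-2", "addr-3"]
rows = [fetch_features(a) for a in addresses]  # builds the task graph lazily
result = combine(rows).compute()               # executes the graph in parallel
```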

The final piece of our data science architecture is the Data Warehouse, which we use to collect analytics data from everywhere we can. For a long time we used a nightly pg_dump to move Postgres data from each service’s database directly into a home-built Data Warehouse. We recently migrated to Google’s BigQuery instead. BigQuery is faster, and lets us fit more data into each query, but the killer feature is that it’s serverless. We have many people running queries at “peak hours”, and don’t want things to slow down just because we have a preallocated number of servers available.
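
Querying it is as simple as handing SQL to the BigQuery client and letting Google worry about capacity; the dataset and table names below are placeholders:

```python
# Minimal sketch with the google-cloud-bigquery client; names are made up.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT market, COUNT(*) AS transactions
    FROM `analytics.transactions`
    GROUP BY market
"""
for row in client.query(query).result():
    print(row.market, row.transactions)
```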

High-Tech Open Houses

Since Opendoor actually owns all the houses we sell, we can be creative about how we show them to potential buyers.

Typically, if you want to see a house for sale, you have to call the listing agent and schedule a time. We realized early on that we could make open houses way more convenient by installing automatic locks on our doors so the homes could be accessed at any time. For version 0 of the project, we literally posted our VP of Product’s phone number on the doors of all our houses — buyers would call in, and he’d tell them the unlock code.

For version 1, we added Twilio so we could automatically send unlock codes over SMS. For version 2, we built a mobile app.
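
Version 1 amounted to little more than this (sketched here with Twilio's Python SDK; the phone numbers and entry code are placeholders):

```python
# Placeholder sketch of texting an unlock code with Twilio's Python SDK.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")
client.messages.create(
    to="+15551234567",      # the visitor who texted the house
    from_="+15557654321",   # the Twilio number posted on the door
    body="Welcome! Your entry code is 1234.",
)
```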

Customers expect a good mobile experience these days, but our all-day open house feature made it twice as important. You can use the app to find nearby homes as you’re driving around, and explore them on a whim — a huge improvement from the traditional process!

We built our app in React Native. The choice was largely pragmatic: our team had a lot of experience with web technologies and almost none with native development. We also wanted to support both iPhone and Android early on, and React Native let us do that (we released the iPhone app first, and adding Android only took an extra couple of weeks).

Not everyone wants to install an app, so it’s still possible to access our homes via SMS. We’ve added a few security mechanisms — one worth mentioning is Blockscore, which lets us quickly run identity verification using phone numbers. For riskier numbers, we disable the automatic entry system and have our support team call the customer to collect their information.

Tools and Workflows

We manage our repositories and do code reviews on GitHub. All code is reviewed by at least one other engineer, but once it’s in the master branch, it’s assumed to be ready to deploy. If you want to deploy your code, you can do it in three steps:

  1. ./bin/deploy staging
  2. Check your work on staging
  3. ./bin/deploy production

This takes 10–15 minutes in total. We’ve worked hard to automate the process so we can move fast as a team. We use Heroku for hosting, and run automated tests on CircleCI. Slack bots report what’s being deployed.

There are a lot of external services we rely on heavily. To run through them briefly: Help Scout and Dyn for emails; Talkdesk and Twilio for calls and customer service; HelloSign for online contract signing; New Relic and Papertrail for system monitoring; Sentry for error reporting.

For analytics, we’ve used a lot of tools: Mixpanel for the web, Amplitude for mobile, Heap for retroactive event tracking. We mainly use Looker for digging into that data and making dashboards.

Joining Opendoor Engineering

Opendoor has a very entrepreneurial, pragmatic culture: Engineers here typically talk with customers, understand their needs, and take the initiative on projects. We’re big on ownership and empowering others, and are aggressively anti-snark.

We’re looking for engineers of all backgrounds: it doesn’t matter what languages you work with now; we’re sure you’ll ramp up fast.

Find out more about Opendoor jobs on StackShare or on our careers site.

Huge thanks to Kevin Teh, Mike Chen, Nelson Ray, Ian Wong, and Alexey Komissarouk for their help putting together this post.

