
Senior Software Engineer, Data Compute


Robinhood

📍 Bellevue, WA (Hybrid) · 💰 $196k–$230k · 🕐 Posted Today

Data Engineer · Hybrid

spark · airflow · databricks · delta-lake · trino · s3 · parquet · unity-catalog

Job Description

About Us

Our mission is to democratize finance for all. An estimated $124 trillion of assets will be inherited by younger generations in the next two decades—the largest transfer of wealth in human history. We are building an elite team, applying frontier technologies to the world's biggest financial problems. We're looking for bold thinkers, sharp problem-solvers, and builders who are wired to make an impact. Robinhood is where ambitious people do the best work of their careers. We're a high-performing, fast-moving team with ethics at the center of everything we do. Expectations are high, and so are the rewards.

About the Role

The Data Compute team is a foundational infrastructure group at Robinhood, responsible for managing and evolving the company's large-scale Spark and Airflow environments. This team serves as a platform provider for all of Robinhood engineering, enabling everything from real-time analytics to critical compliance and operations workflows. We are currently leading major modernization efforts, migrating workloads to Databricks, adopting serverless patterns, and optimizing our lakehouse fundamentals. The team focuses on high reliability, cost efficiency, and delivering an exceptional developer experience for our internal customers.

As a Senior Software Engineer on the Data Compute team, you will be a key builder of our core ingestion and compute primitives. You will design and implement scalable infrastructure that supports millions of daily jobs while modernizing our platform onto Delta Lake and Unity Catalog. Your work will directly impact how data is processed across the entire company, from product engineering to analytics. You'll partner with engineering leaders to drive technical direction and ensure our systems meet the highest standards for performance and governance. This is a chance to define the next generation of data processing at Robinhood.

This role is based in our Bellevue, WA office, with in-person attendance expected at least 3 days per week.

At Robinhood, we believe in the power of in-person work to accelerate progress, spark innovation, and strengthen community. Our office experience is intentional, energizing, and designed to fully support high-performing teams.

Responsibilities

  • Design and build scalable platform primitives for Spark and Airflow to support Robinhood's global data infrastructure needs.
  • Lead the migration and modernization of Spark workloads to serverless Databricks and Delta Lake architectures.
  • Optimize compute resource utilization and efficiency to manage costs across large-scale distributed systems.
  • Collaborate with internal teams across analytics and product engineering to deliver a seamless, self-serve data processing experience.
  • Improve platform reliability and governance by implementing advanced metadata management and access controls via Unity Catalog and Trino.

Requirements

  • Extensive experience with large-scale Spark and Databricks or similar platform infrastructure.
  • Deep expertise in data orchestration using Airflow for complex job lifecycle management.
  • Proven track record with lakehouse fundamentals, including S3-based data lakes and table/storage formats such as Delta Lake and Parquet.
  • Familiarity with query and serving infrastructure such as Trino, Pinot, or Hive Metastore.
  • Ability to own multi-team platform reliability, including cost optimization and developer experience initiatives.

Benefits

  • Challenging, high-impact work to grow your career
  • Performance-driven compensation with multipliers for outsized impact, bonus programs, equity ownership, and 401(k) matching
  • Top-tier benefits to fuel your work, including 100% paid health insurance for employees and 90% coverage for dependents
  • Access to the best AI tools on the market and continuous AI skill-building for every employee, technical or not
  • Lifestyle wallet—a highly flexible benefits spending account for wellness, learning, and more
  • Employer-paid life and disability insurance, fertility benefits, and mental health benefits
  • Time off to recharge, including company holidays, paid time off, sick time, parental leave, and more
  • Exceptional office experience with catered meals, events, and comfortable workspaces

Compensation

In addition to the base pay range listed below, this role is also eligible for bonus opportunities, equity, and benefits.

Base pay for the successful applicant will depend on a variety of job-related factors, which may include education, training, experience, location, business needs, or market demands. The expected base pay range for this role is based on the location where the work will be performed and is aligned to one of three compensation zones. For other locations not listed, compensation can be discussed with your recruiter during the interview process.

Zone 1 (Menlo Park, CA; New York, NY; Bellevue, WA; Washington, DC): $196,000–$230,000 USD

Zone 2 (Denver, CO; Westlake, TX; Chicago, IL): $172,000–$202,000 USD

Zone 3 (Lake Mary, FL; Clearwater, FL; Gainesville, FL): $153,000–$179,000 USD

Unchain Data provides Web3 data job aggregation as a common good. Jobs are posted by third parties and are not individually verified. Always exercise caution: never download software requested during a hiring process, avoid clicking unfamiliar links in interviews, verify that URLs are legitimate, and use trusted meeting tools like Google Meet or Zoom.
