
Junction

Data Engineer

Reposted 12 Days Ago
Remote
16 Locations
$160K-$200K Annually
Mid level

Healthcare is in crisis, and the people behind the results deserve better. With data pouring in from wearables, lab tests, and patient–doctor interactions, we're entering an era of abundant health data.

Junction is building the infrastructure layer for diagnostic healthcare, making patient data accessible, actionable, and automated across labs and devices. Our mission is simple but ambitious: use health data to unlock unprecedented insight into human health and disease.

If you're passionate about how technology can supercharge healthcare, you’ll fit right in.

Backed by Creandum, Point Nine, 20VC, YC, and leading angels, we’re working to solve one of the biggest challenges of our time: making healthcare personalized, proactive, and affordable. We’re already connecting millions and scaling fast.

Short on time?

  • Who you are: A data engineer with solid software engineering fundamentals who can build, own, and scale reliable data pipelines and warehouse infrastructure.

  • Ownership: You’ll shape our data foundation from ingestion through transformation — and make it analytics-ready at scale.

  • Salary: $160K-$200K + equity

  • Time zone: EST required; NYC preferred.

Why we need you

Junction powers modern diagnostics at scale, and as we grow, our platform is becoming increasingly data-intensive. The way we move, structure, and surface data directly affects our ability to support customers, deliver real-time insights, and unlock the next generation of diagnostics products.

We’re hiring our first Data Engineer to take ownership of that foundation.

  • Build and run pipelines that turn raw, messy healthcare data into clean, trusted, usable information

  • Power customer products, internal analytics, and the AI models behind our next wave of diagnostics

  • Design how data flows through an entire diagnostics ecosystem — not just maintain ETLs

  • Build scalable, cloud-native pipelines on GCP and eliminate bottlenecks as we scale

  • Hunt down edge cases, build guardrails for quality (see the sketch below), and ship systems other engineers rely on daily

If you love untangling complexity and building data systems that truly make an impact, you’ll fit right in — and the systems you build will unlock new products and accelerate everything we ship.
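
To make the guardrails point concrete, here's a minimal, hypothetical sketch of the kind of post-load check a pipeline might run, using the google-cloud-bigquery client (the table and column names are made up, not our production code):

    from google.cloud import bigquery


    def check_todays_load(table: str = "analytics.raw_lab_results") -> None:
        """Hypothetical guardrail: fail loudly if today's load is empty or
        contains rows with missing patient identifiers."""
        client = bigquery.Client()
        query = f"""
            SELECT
              COUNT(*) AS row_count,
              COUNTIF(patient_id IS NULL) AS null_patient_ids
            FROM `{table}`
            WHERE DATE(loaded_at) = CURRENT_DATE()
        """
        row = next(iter(client.query(query).result()))
        if row.row_count == 0:
            raise RuntimeError(f"{table}: no rows loaded today")
        if row.null_patient_ids > 0:
            raise RuntimeError(
                f"{table}: {row.null_patient_ids} rows missing patient_id"
            )

The idea: a load that produces empty or malformed data should fail fast, before it reaches dashboards or models.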

What you’ll be doing day to day

  • Designing and operating ingestion, transformation, and replication pipelines on GCP

  • Managing orchestration and streamlining ELT/ETL workflows (e.g., Temporal; see the sketch after this list)

  • Creating clean, scalable, analytics-ready schemas in BigQuery

  • Implementing monitoring, alerting, testing, and observability across data flows

  • Integrating data from APIs, operational databases, and unstructured sources

  • Collaborating with product, engineering, analytics, and compliance on secure, high-quality data delivery
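
For flavor, here's a minimal sketch of what a Temporal-orchestrated pipeline step can look like with the temporalio Python SDK; the workflow, activities, and paths are illustrative, not our actual code:

    from datetime import timedelta

    from temporalio import activity, workflow
    from temporalio.common import RetryPolicy


    @activity.defn
    async def extract_lab_results(source_uri: str) -> str:
        # Hypothetical: fetch raw payloads from a partner API or bucket,
        # stage them as files, and return the staging path.
        return "gs://staging/lab_results.jsonl"


    @activity.defn
    async def load_to_bigquery(staging_path: str) -> None:
        # Hypothetical: load the staged files into a raw BigQuery table.
        pass


    @workflow.defn
    class LabResultsPipeline:
        @workflow.run
        async def run(self, source_uri: str) -> None:
            # Each stage is a separately retried activity, so a transient
            # failure in loading does not rerun the extraction.
            retry = RetryPolicy(maximum_attempts=5)
            staging_path = await workflow.execute_activity(
                extract_lab_results,
                source_uri,
                start_to_close_timeout=timedelta(minutes=10),
                retry_policy=retry,
            )
            await workflow.execute_activity(
                load_to_bigquery,
                staging_path,
                start_to_close_timeout=timedelta(minutes=30),
                retry_policy=retry,
            )

The pattern keeps retries, timeouts, and backfills declarative in the orchestrator rather than hand-rolled in every pipeline.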

Requirements

  • Solid engineering fundamentals and experience building pipelines from scratch

  • Python and SQL fluency; comfortable across relational + NoSQL systems

  • Experience with orchestrators like Temporal, Airflow, or Dagster

  • Hands-on with BigQuery, Bigtable, and core GCP data tooling

  • Ability to turn messy, ambiguous data problems into clear, scalable solutions

  • Startup or small-team experience; comfortable moving fast with ownership

  • Communication skills, attention to detail, and a bias toward clarity and reliability

You don’t need to tick every box to fit in here. If the problems we’re solving genuinely interest you and you know you can contribute, we’d love to talk.

Nice to have

  • Experience with HIPAA/PHI or regulated healthcare data

  • Background with time-series data or event-driven architectures (one flavor is sketched after this list)

  • Familiarity with dbt or similar transformation frameworks

  • Experience with healthcare, diagnostics, or ML/AI workloads
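
On the time-series point, here's a hypothetical sketch (made-up project, dataset, and field names) of the kind of analytics-ready BigQuery table we mean, created with the google-cloud-bigquery client:

    from google.cloud import bigquery

    client = bigquery.Client()

    # One row per biomarker observation.
    schema = [
        bigquery.SchemaField("patient_id", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("biomarker", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("value", "FLOAT", mode="NULLABLE"),
        bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
    ]

    table = bigquery.Table("my-project.analytics.lab_events", schema=schema)
    # Partition by event day and cluster by patient/biomarker so time-range
    # and per-patient scans stay cheap as event volume grows.
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="event_ts",
    )
    table.clustering_fields = ["patient_id", "biomarker"]
    client.create_table(table, exists_ok=True)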

How you'll be compensated

  • Salary: $160K-$200K + early-stage options

  • Your salary is dependent on your location and experience level, and is generated by our salary calculator. Read more in our handbook here.

  • Generous early-stage options (extended exercise window after 2 years of employment); you'll receive 3 offers based on how much equity you'd like

  • Regular in-person offsites; the last were in Morocco and Tenerife

  • Bi-weekly remote team happy hours and events

  • Monthly learning budget of $300 for personal development/productivity

  • Flexible, remote-first working, including $1K for home-office equipment

  • 25 days off a year + national holidays

  • Healthcare coverage depending on location

Oh and before we forget:

  • Backend Stack: Python (FastAPI), Go, PostgreSQL, Google Cloud Platform (Cloud Run, GKE, Cloud Bigtable, etc.), Temporal Cloud (a toy endpoint follows this list)

  • Frontend Stack: TypeScript, Next.js

  • API docs are here: https://docs.junction.com/

  • Company handbook, with our engineering values and principles, is here
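
And purely for flavor, a toy endpoint in the shape of that backend stack (illustrative, not from our codebase):

    from fastapi import FastAPI

    app = FastAPI()


    @app.get("/health")
    async def health() -> dict[str, str]:
        # Minimal liveness check; run locally with: uvicorn main:app
        return {"status": "ok"}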

Important details before applying:

  • We only hire folks physically based in GMT and EST time zones; more information here.

  • We do not sponsor visas right now, given our stage.

Top Skills

Airflow
BigQuery
Bigtable
Dagster
FastAPI
GCP
Next.js
Postgres
Python
SQL
Temporal
TypeScript
