The Data Reliability Engineer will enhance the resilience and scalability of data infrastructure, focusing on automation and reliability. Responsibilities include managing data pipelines, operating Kubernetes clusters, and defining observability standards.
Where You Come In
As our models scale to "omni" capabilities, our data infrastructure must be unbreakable. We are looking for a Data Reliability Engineer who brings a Site Reliability Engineering (SRE) mindset to the world of massive-scale data. You will be responsible for the resilience, automation, and scalability of the petabyte-scale pipelines that feed our research. This is not just about keeping the lights on; it’s about treating infrastructure as code and building self-healing data systems that allow our researchers to train on massive datasets without interruption. Whether you are a junior engineer with a passion for automation or a seasoned SRE veteran, you will play a critical role in hardening the backbone of Luma’s intelligence.
What You'll Do
- Automate Everything: Apply Infrastructure-as-Code (IaC) principles using Terraform to provision, manage, and scale our data infrastructure.
- Harden Data Pipelines: Build reliability and fault tolerance into our core data ingestion and processing workflows, ensuring high availability for research jobs.
- Scale Kubernetes & Ray: Operate and optimize large-scale Kubernetes clusters and Ray deployments to handle bursty, high-throughput workloads.
- Define Reliability: Establish Service Level Objectives (SLOs) and observability standards (Prometheus/Grafana) for our data platforms.
- Debug & Heal: Serve as the first line of defense for complex infrastructure failures, diagnosing root causes in distributed storage and compute systems.
Who You Are
- Deep SRE/DevOps proficiency: You live and breathe Linux, networking, and automation.
- Infrastructure-as-Code Native: You have extensive experience with Terraform, Ansible, or similar tools to manage complex cloud environments (AWS/GCP).
- Kubernetes Expert: You have managed Kubernetes in production and understand its internals, not just how to deploy containers.
- Python Proficiency: You can write high-quality Python code for automation, tooling, and infrastructure management.
- Data-Minded: You understand the specific challenges of stateful data systems and high-throughput storage (S3/Object Store).
What Sets You Apart (Bonus Points)
- Experience managing GPU clusters or AI/ML workloads.
- Background in both Software Engineering and Operations (DevOps).
- Experience with high-performance networking (InfiniBand/RDMA).
The base pay range for this role is $170,000 – $360,000 per year.
About Luma
Luma’s mission is to build unified general intelligence that can generate, understand, and operate in the physical world.
We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.