As an ML Engineer at Luma AI, you will design high-performance model serving pipelines, manage GPU resources, and optimize CI/CD for large-scale AI systems.
About Luma AI
Luma's mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable and useful systems, the next step function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
Where You Come In
This is a rare opportunity to build the foundational infrastructure that powers our large-scale multimodal models. We believe reliable, high-performance infrastructure is the biggest differentiator between success and failure in our mission. You will be a foundational member of the team, designing the critical systems that allow us to train and serve next-generation AI to millions of users.
What You'll Do
This is a 0-to-1 opportunity, not a maintenance role. You will have massive ownership to:
- Architect end-to-end model serving pipelines and integrate new model architectures from our research team into our core, high-throughput inference engine.
- Build robust and sophisticated scheduling systems to manage jobs based on cluster availability and user priority, ensuring we optimally leverage thousands of expensive GPU resources.
- Design and implement dynamic, traffic-based systems for hotswapping models on our GPU workers to maximize fleet efficiency and meet product SLOs.
- Own the end-to-end CI/CD pipelines, including creating a resilient artifact store to manage all model checkpoints across multiple versions and providers.
- Develop and maintain user-friendly APIs and interaction patterns that empower our product and research teams to ship groundbreaking features at high velocity.
- Manage and optimize our complex inference workloads at scale, operating across multiple clusters and hardware providers.
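To make the scheduling responsibility above concrete, here is a minimal sketch of priority-plus-availability job scheduling over a fixed GPU pool. The `Job` and `Scheduler` names are hypothetical illustrations, not Luma's actual systems; a production scheduler would also handle preemption, fairness, and multi-cluster placement.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                        # lower value = higher priority
    seq: int                             # tie-breaker: FIFO within a priority
    name: str = field(compare=False)
    gpus_needed: int = field(compare=False)

class Scheduler:
    """Toy scheduler: launch the highest-priority jobs that fit free GPUs."""

    def __init__(self, total_gpus: int):
        self.free_gpus = total_gpus
        self.queue: list[Job] = []       # min-heap ordered by (priority, seq)
        self._seq = itertools.count()

    def submit(self, name: str, gpus: int, priority: int) -> None:
        heapq.heappush(self.queue, Job(priority, next(self._seq), name, gpus))

    def schedule(self) -> list[str]:
        launched, deferred = [], []
        while self.queue:
            job = heapq.heappop(self.queue)
            if job.gpus_needed <= self.free_gpus:
                self.free_gpus -= job.gpus_needed
                launched.append(job.name)
            else:
                deferred.append(job)     # not enough GPUs; keep it queued
        for job in deferred:
            heapq.heappush(self.queue, job)
        return launched
```

The heap keyed on `(priority, seq)` keeps scheduling decisions deterministic: equal-priority jobs launch in submission order, and oversized jobs simply wait for capacity rather than blocking smaller ones behind them.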
Who You Are
We are looking for a world-class builder with a proven history of creating and managing large-scale, high-performance systems. You are a strong fit if you have:
- 5+ years of professional engineering experience with deep, hands-on proficiency in Python and complex distributed systems architecture.
- Extensive, practical experience building and managing systems at scale, specifically with queues, scheduling, traffic-control, and fleet management.
- Deep expertise in our core infrastructure stack: Linux, Docker, and Kubernetes.
- Strong experience with Redis, S3-compatible storage, and public cloud platforms (AWS).
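The traffic-based model hotswapping described above often reduces to a cache-eviction problem: keep the hottest models resident on each worker and evict the coldest when capacity runs out. A minimal LRU sketch, using a hypothetical `ModelCache` with stand-in load handles in place of real GPU loads:

```python
from collections import OrderedDict

class ModelCache:
    """Toy LRU of models resident on one GPU worker (illustrative only).

    A real fleet manager would also weigh model size, warm-up cost,
    and per-product SLO targets before evicting.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity                     # max resident models
        self.resident: OrderedDict[str, str] = OrderedDict()

    def get(self, model_id: str) -> str:
        if model_id in self.resident:
            self.resident.move_to_end(model_id)      # mark as recently used
            return self.resident[model_id]
        if len(self.resident) >= self.capacity:
            evicted, _ = self.resident.popitem(last=False)  # drop coldest model
        handle = f"loaded:{model_id}"                # stand-in for a GPU load
        self.resident[model_id] = handle
        return handle
```

Because request traffic drives `get`, the resident set naturally tracks demand: a burst of requests for one model keeps it pinned while idle models age out.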
What Sets You Apart (Bonus Points)
You'll stand out as an exceptional candidate if you also bring:
- Experience with high-performance, large-scale ML systems (managing >100 GPUs).
- Deep familiarity with PyTorch and CUDA.
- Experience with modern networking stacks, including RDMA (RoCE, Infiniband, NVLink).
- Familiarity with FFmpeg and multimedia processing pipelines.
The base pay range for this role is $187,500 – $395,000 per year.