
Deepgram

Platform Engineer – AI/ML Infrastructure

Posted 10 Hours Ago
Remote
2 Locations
160K-220K Annually
Senior level
Company Overview

Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS), and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models, accessed through APIs or as self-managed software, because of our unmatched accuracy, latency, and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth over the past four years, over 50,000 years of audio processed, and over 1 trillion words transcribed. No organization in the world understands voice better than Deepgram.

Opportunity

We're looking for an expert (Senior/Staff-level) Platform Engineer to build and operate the hybrid infrastructure foundation for our advanced AI/ML research and product development. You'll architect, build, and run the platform spanning AWS and our bare metal data centers, empowering our teams to train and deploy complex models at scale. This role is focused on creating a robust, self-service environment using Kubernetes, AWS, and Infrastructure-as-Code (Terraform), and orchestrating high-demand GPU workloads using schedulers like Slurm.

What You’ll Do

  • Architect and maintain our core computing platform using Kubernetes on AWS and on-premise, providing a stable, scalable environment for all applications and services.

  • Develop and manage our entire infrastructure using Infrastructure-as-Code (IaC) principles with Terraform, ensuring our environments are reproducible, versioned, and automated.

  • Design, build, and optimize our AI/ML job scheduling and orchestration systems, integrating Slurm with our Kubernetes clusters to efficiently manage GPU resources (see the GPU-inventory sketch after this list).

  • Provision, manage, and maintain our on-premise bare metal server infrastructure for high-performance GPU computing.

  • Implement and manage the platform's networking (CNI, service mesh) and storage (CSI, S3) solutions to support high-throughput, low-latency workloads across hybrid environments.

  • Develop a comprehensive observability stack (monitoring, logging, tracing) to ensure platform health, and create automation for operational tasks, incident response, and performance tuning.

  • Collaborate with AI researchers and ML engineers to understand their infrastructure needs and build the tools and workflows that accelerate their development cycle.

  • Automate the lifecycle of single-tenant, managed deployments.
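
As a hedged illustration of the self-service GPU tooling described above (not Deepgram’s actual code or stack), the minimal sketch below uses the official Kubernetes Python client to total allocatable GPUs per node. It assumes the NVIDIA device plugin is exposing the `nvidia.com/gpu` extended resource and that a local kubeconfig grants access to the cluster.

```python
from kubernetes import client, config

def allocatable_gpus_by_node():
    """Return {node_name: allocatable GPU count} for the current cluster."""
    # Assumes a local kubeconfig; inside a pod you would use
    # config.load_incluster_config() instead.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    totals = {}
    for node in v1.list_node().items:
        allocatable = node.status.allocatable or {}
        # The NVIDIA device plugin advertises GPUs as the extended resource
        # "nvidia.com/gpu"; nodes without GPUs simply omit the key.
        totals[node.metadata.name] = int(allocatable.get("nvidia.com/gpu", "0"))
    return totals

if __name__ == "__main__":
    for name, gpus in sorted(allocatable_gpus_by_node().items()):
        print(f"{name}: {gpus} allocatable GPU(s)")
```

Run against a GPU-enabled cluster, this prints one line per node; the same inventory could feed a capacity dashboard or inform Slurm partition sizing.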

You’ll Love This Role If You

  • Are passionate about building platforms that empower developers and researchers.

  • Enjoy creating elegant, automated solutions for complex infrastructure challenges in both cloud and data center environments.

  • Thrive on optimizing hybrid infrastructure for performance, cost, and reliability.

  • Are excited to work at the intersection of modern platform engineering and cutting-edge AI.

  • Love to treat infrastructure as a product, continuously improving the developer experience.

It’s Important To Us That You Have

  • 5+ years of experience in Platform Engineering, DevOps, or Site Reliability Engineering (SRE).

  • Proven, hands-on experience building and managing production infrastructure with Terraform.

  • Expert-level knowledge of Kubernetes architecture and operations in a large-scale environment.

  • Experience with high-performance compute (HPC) job schedulers, specifically Slurm, for managing GPU-intensive AI workloads (a short parsing sketch follows this list).

  • Experience managing bare metal infrastructure, including server provisioning (e.g., PXE boot, MAAS), configuration, and lifecycle management.

  • Strong scripting and automation skills (e.g., Python, Go, Bash).
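
To make the scripting-and-automation expectation concrete, here is a minimal, hypothetical sketch (an illustration, not part of the role’s requirements) that shells out to Slurm’s `sinfo` and totals GPU GRES per partition. It assumes the standard `-N`, `--noheader`, and `-o "%P %G"` flags, and GRES strings in the usual `gpu[:type]:count` form.

```python
import subprocess
from collections import defaultdict

def gpus_by_partition():
    """Sum GPU GRES per Slurm partition by parsing `sinfo -N` output."""
    out = subprocess.run(
        ["sinfo", "-N", "--noheader", "-o", "%P %G"],  # one line per node/partition pair
        check=True, capture_output=True, text=True,
    ).stdout
    totals = defaultdict(int)
    for line in out.splitlines():
        partition, gres = line.split(maxsplit=1)
        for entry in gres.split(","):
            if entry.startswith("gpu"):
                # GRES entries usually look like "gpu:a100:8" or "gpu:8",
                # optionally followed by socket info such as "(S:0-1)".
                count = entry.split(":")[-1].split("(")[0]
                totals[partition.rstrip("*")] += int(count)
    return dict(totals)

if __name__ == "__main__":
    for partition, gpus in sorted(gpus_by_partition().items()):
        print(f"{partition}: {gpus} GPUs")
```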

  

It Would Be Great If You Had

  • Experience with CI/CD systems (e.g., GitLab CI, Jenkins, ArgoCD) and building developer tooling.

  • Familiarity with FinOps principles and cloud cost optimization strategies.

  • Knowledge of Kubernetes networking (e.g., Calico, Cilium) and storage (e.g., Ceph, Rook) solutions.

  • Experience in a multi-region or hybrid cloud environment.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!

Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.

We are happy to provide accommodations for applicants who need them.

Compensation Range: $160K - $220K



Top Skills

AWS
Bash
Calico
Ceph
CI/CD
Cilium
Go
Kubernetes
Python
Rook
Slurm
Terraform

