
Nebius

Forward Deployed Engineer, Ecosystem

Posted 2 Days Ago
Remote
Hiring Remotely in United States
$255K-$315K Annually
Senior level

Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1,400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role

Nebius builds the infrastructure serious AI teams run on — GPU clusters, inference runtimes, agent development environments, data pipelines — all of it purpose-built for the most demanding AI workloads. What we are now building is the ecosystem function that ensures the best AI companies choose to build on us, integrate with us, and stay. 

As a Forward Deployed Engineer, Ecosystem, you will sit at the intersection of solution architecture and hands-on engineering. You assess how partner products actually work on our stack, define the reference architecture for each integration, build the working prototype that proves it, and translate what you find into product requirements that shape what Nebius ships next. 

Your responsibilities will include:

Solutioning & Architecture 

  • Design and prototype integrations between partner products and the Nebius platform — fast, hands-on, and technically sound
  • Define reference architectures for partner integrations — not just what works, but how it should work at scale and in production
  • Scope partner architectures against our platform — how the product actually works on our stack, where it snaps together, and where it breaks
  • Build production-quality proof-of-concepts across the AI stack — agentic pipelines, RAG architectures, inference optimization patterns, and multi-model orchestration — that serve as the starting point for product creation: not a requirements doc, a working thing
  • Maintain a library of reference architectures and integration patterns that internal product and engineering teams can build from 

Technical Partner Scoping 

  • Work directly with partner engineering teams to scope, prototype, and progress integrations
  • Assess partner architectures honestly — if the integration is painful, that is signal; if it snaps together in a weekend, that is also signal; report both
  • Provide technical guidance to partners on how to maximize performance, reliability, and cost efficiency on Nebius infrastructure
  • Produce technical scoping that gives your pod partner and internal teams a clear picture of integration feasibility, depth, and complexity 

Internal 

  • Translate external integration findings into actionable product requirements for Nebius platform teams
  • Work with ISV partners, SI teams, and field teams to scale solution adoption and drive revenue once a solution is ready
  • Surface recurring architectural patterns and integration gaps to inform platform roadmap decisions
  • Participate in platform planning as the technical voice of what you are seeing and building in the field 

Ecosystem Presence 

  • Represent Nebius at hackathons, in open source communities, and at technical events
  • Build in public — demos, reference architectures, and integrations that establish Nebius as the platform serious AI builders choose
  • Stay current with the AI tooling ecosystem — you know what shipped last week and what it means for our stack 

Platform focus areas: 

Depending on your background and mutual fit, you will focus on one or more of the following: 

  • Agentic — agent frameworks, memory systems, tool integration, orchestration, MCP, guardrails
  • Managed Inference — inference runtimes, model serving, optimization tooling, speculative decoding, KV-cache routing
  • IaaS / Managed Infrastructure — cloud-native integrations, GPU orchestration, enterprise platform connectors
  • Data — vector databases, retrieval systems, RAG architectures, data pipeline integrations, synthetic data tooling 

We expect you to have:

  • 6+ years of hands-on engineering experience in AI application development, ML systems, or AI infrastructure
  • Deep working knowledge of the AI developer stack — LLM APIs, inference runtimes, orchestration frameworks, vector databases, RAG architectures, agentic pipelines — built through shipping, not reading
  • Hands-on experience with agentic frameworks such as LangChain, LangGraph, CrewAI, AutoGen, or equivalent
  • Strong Python programming skills and comfort prototyping end-to-end AI systems quickly
  • Experience defining reference architectures and technical patterns — not just implementing them
  • Proven ability to move from idea to working prototype fast — you have shipped meaningful things under time pressure and found it energizing
  • Experience building integrations across APIs and developer platforms — you understand where the complexity actually lives
  • Comfortable working across both external partner engineering teams and internal Nebius product and engineering teams simultaneously
  • Strong technical communication — you can explain architecture decisions and integration findings to a founding CTO and a non-technical partner lead in the same day 

It will be an added bonus if you have: 

  • Experience with inference frameworks and optimization: vLLM, SGLang, TensorRT-LLM, speculative decoding, quantization, batching, KV-cache routing
  • Familiarity with NVIDIA's software stack: CUDA, TensorRT, NeMo, or equivalent
  • Experience with multimodal AI models — vision-language, speech, or structured data
  • Won or placed at major AI hackathons in the past 12 months
  • Worked as a developer advocate, solutions engineer, or technical partner manager at a leading AI platform or developer tooling company
  • Been an early engineer at a YC-backed AI startup — you built the product under real constraints
  • Open source projects or public demos with meaningful community adoption
  • Proficiency with DevOps tools: Docker, Kubernetes, Git 

Preferred technical stack: 

  • Languages — Python
  • ML frameworks — vLLM, SGLang, TensorRT-LLM, Transformers, OpenAI / Anthropic SDKs
  • Agentic frameworks — LangChain, LangGraph, CrewAI, AutoGen, smolagents, or equivalent
  • Vector databases — Qdrant, Weaviate, Milvus, pgvector
  • API and web frameworks — FastAPI, Flask
  • DevOps — Kubernetes, Docker, Git
  • Cloud platforms — AWS, GCP, Azure 

Key Employee Benefits:

  • Health Insurance: 100% company-paid medical, dental, and vision coverage for employees and families.
  • 401(k) Plan: Up to 4% company match with immediate vesting.
  • Parental Leave: 20 weeks paid for primary caregivers, 12 weeks for secondary caregivers.
  • Remote Work Reimbursement: Up to $85/month for mobile and internet.
  • Disability & Life Insurance: Company-paid short-term, long-term, and life insurance coverage.

Compensation

We offer competitive compensation ranging from $255K to $315K OTE (on-target earnings), plus equity, based on your experience, skills, and location.


What we offer: 

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Flexible working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.

We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!

Equal Opportunity Statement:

Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.

Applicants must be authorized to work in the country in which they apply, and will be required to provide proof of employment eligibility as a condition of hire.
