
IXIS

Data Analytics Engineer

Posted 4 Hours Ago
Remote
Hiring Remotely in United States
Mid level

About IXIS

IXIS is a digital consultancy and technology company that empowers organizations to make smarter, faster decisions through the seamless integration of strategy, technology, and analytics. Since 2012, we’ve helped leading brands harness their marketing, advertising, and customer experience data to unlock insights, improve performance, and achieve digital transformation. Our expert team spans media strategy, data governance, analytics enablement, and platform implementation.

At the heart of our offering is ATLAS, our proprietary data activation platform, which simplifies complex data challenges by consolidating, transforming, and delivering data across tools, teams, and workflows. With ATLAS, our clients gain full visibility and control over their data ecosystems, driving measurable results and operational efficiency.

We offer competitive compensation packages including health, dental, and vision insurance, short-term and long-term disability coverage, a 401(k) with company match, flexible work schedules, a wellness plan, and exceptional growth opportunities.

This is a full-time remote position with the option of working from our office in Burlington, VT.

Overview: IXIS is seeking a senior-level Data Analytics Engineer to join our Data Analytics Engineering (DAE) team. You will play a key role in managing and evolving our data ingestion, sanitation, and transformation pipelines. Our team handles complex client data, joining Adobe and GA4 clickstream data with social, CRM, and other business data to create metrics and segments that power our data visualization products.

This is a hands-on individual contributor (IC) role. You will be expected to lead by example through high-quality design, coding, and problem-solving, and by contributing to technical direction.

You will complement an experienced team lead and collaborate with other team members while contributing your own perspective and best practices. We are particularly interested in candidates who have seen different ways of doing things and can help us evolve by improving our data quality, scalability, and overall pipeline performance.

This is a high-impact role where you’ll help shape our technical direction, improve existing systems, and introduce new tools and workflows to make our data products even better.

Success in this role looks like:

  • Designing performant data pipelines for ingestion and transformation of complex datasets into usable data products.
  • Building scalable infrastructure that supports hourly, daily, and weekly update cycles.
  • Implementing automated QA checks and monitoring that catch data anomalies before they reach clients.
  • Re-architecting parts of our system to improve performance or reduce cost.
  • Supporting team members through code reviews and collaboration.
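To give a flavor of the "automated QA checks" point above, here is a minimal, hypothetical sketch (not IXIS code) of an anomaly guard that flags metric values deviating sharply from a trailing baseline before they reach a client-facing dashboard. The window and tolerance values are illustrative assumptions:

```python
# Hypothetical sketch of an automated data QA check: flag daily metric
# values that deviate from a trailing-window average by more than a
# given tolerance, before the data is published downstream.
from statistics import mean

def flag_anomalies(values, window=7, tolerance=0.5):
    """Return indices of values deviating from the trailing-window mean
    by more than `tolerance` (expressed as a fraction of that mean)."""
    flagged = []
    for i in range(window, len(values)):
        baseline = mean(values[i - window:i])
        if baseline and abs(values[i] - baseline) / baseline > tolerance:
            flagged.append(i)
    return flagged

# A sudden 3x spike after a flat week is flagged; a flat series is not.
print(flag_anomalies([100] * 7 + [300]))  # [7]
```

In practice this kind of check would typically live in a pipeline step or an observability tool rather than a standalone script.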

Team & Collaboration:
You’ll be working alongside a senior team lead who sets technical direction, while also collaborating with other engineers, QA, data scientists, and client teams. You’ll be expected to contribute both as a builder and a mentor (everyone is a mentor; it’s part of our culture).

Responsibilities:

  • Build enterprise-grade batch and real-time data processing pipelines using AWS with a focus on serverless architectures.
  • Design and implement automated ELT processes to integrate disparate datasets.
  • Work across multiple teams to ingest, extract, and process data using Python, R, zsh, and SQL, and via REST and GraphQL APIs.
  • Join and transform clickstream and CRM data into meaningful metrics and segments for visualization.
  • Create automated acceptance, QA, and reliability checks for business logic and data integrity.
  • Design appropriately normalized schemas and determine when to use SQL vs. NoSQL solutions.
  • Optimize infrastructure and schema design for performance, scalability, and cost.
  • Help define and maintain CI/CD and deployment pipelines for data infrastructure.
  • Containerize and deploy solutions using Docker and AWS ECS.
  • Proactively identify and resolve data discrepancies and implement safeguards to prevent recurrence.
  • Contribute to documentation, onboarding materials, and cross-team enablement.
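As an illustration of the "join and transform" responsibility above, the following hypothetical sketch combines a clickstream table with CRM data to produce a per-segment metric. All table and column names are invented for the example; real pipelines here would run against cloud warehouses, not in-memory SQLite:

```python
# Hypothetical sketch: join clickstream events to CRM segments and
# aggregate into a per-segment metric, using in-memory SQLite purely
# for illustration.
import sqlite3

def clicks_by_segment(conn):
    """Return a mapping of CRM segment -> number of click events."""
    return dict(conn.execute("""
        SELECT crm.segment, COUNT(*) AS clicks
        FROM clicks
        JOIN crm ON crm.user_id = clicks.user_id
        GROUP BY crm.segment
        ORDER BY crm.segment
    """).fetchall())

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clicks (user_id TEXT, page TEXT);
    CREATE TABLE crm (user_id TEXT, segment TEXT);
    INSERT INTO clicks VALUES ('u1', '/home'), ('u1', '/buy'), ('u2', '/home');
    INSERT INTO crm VALUES ('u1', 'enterprise'), ('u2', 'smb');
""")
print(clicks_by_segment(conn))  # {'enterprise': 2, 'smb': 1}
```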

Required Education and Skills:

  • B.A./B.S. in Computer Science, Software Engineering, or a related field; training in statistics/mathematics/machine learning is a plus.
  • 3-5 years of experience building scalable, reliable data pipelines and data products in a cloud environment (AWS preferred).
  • Deep understanding of ELT processes and data modeling principles.
  • Strong programming skills in Python (or similar scripting languages).
  • Advanced SQL skills and intermediate to advanced relational database design experience.
  • Familiarity with joining large behavioral datasets like Adobe and GA4 clickstream data.
  • Excellent problem-solving skills and attention to data detail.
  • Experience managing and prioritizing multiple initiatives with minimal supervision.

Additional Desired Skills:

  • Experience with dbt or other transformation-layer tools.
  • Familiarity with Docker containerization and orchestration.
  • Experience with statistical programming (R or Python preferred).
  • API design or integration experience for data pipelines.
  • Experience developing in a Linux or Mac environment.
  • Exposure to data QA frameworks or observability tools (e.g., Great Expectations or Monte Carlo).

If you’re passionate about turning raw data into reliable, actionable insight—and want to help shape the future of data engineering at a growing SaaS company—we’d love to hear from you.

Top Skills

AWS
Docker
GraphQL
NoSQL
Python
REST
SQL


