Company Description
Givebutter is the most-loved nonprofit fundraising and CRM platform, empowering millions of changemakers to raise more, pay less, and give better. Nonprofits use Givebutter to replace multiple tools: they can launch fundraisers and events, collect donations through donation forms, manage donors in a built-in CRM, and send emails and text blasts—all in one place. Use of the Givebutter platform is completely free with a 100% transparent tip-or-fee model.
Givebutter has been certified as a Great Place to Work® every year since 2021, and is the #1 rated nonprofit software company on G2 across multiple categories.
Our mission is to empower the changemaker in all of us. We believe giving should be fun, so you’ll want to do it again, and we also believe that work should be fun, so that you’ll have the greatest impact. We are excited to hear from talented people who want to work with other talented people in making the world a butter place—and have fun along the way.
Role Description
Givebutter is seeking a curious and technically strong Analytics Engineer to join our growing Data team. This role partners closely with engineers, analysts, and stakeholders to understand business needs, uncover insights in our data, and build reliable data systems that scale with the company. You’ll help maintain and expand our analytical data model, monitor and improve our data pipelines, and investigate complex data questions across our systems. As part of this work, you’ll also contribute to the documentation and structured context that helps both stakeholders and internal AI tools effectively interact with our data.
We want to hear from people who…
- Have experience designing and maintaining analytical data models that support reporting and operational decision-making.
- Have worked with modern data stacks and understand how to build and maintain reliable data pipelines.
- Can partner with Product, Engineering, and business stakeholders to understand data needs and translate them into technical solutions.
- Enjoy performing deep data investigation and root-cause analysis when numbers don’t look right.
- Are genuinely excited about working with AI, eager to explore new tools, experiment with use cases, and actively champion its adoption to improve workflows and decision-making across the organization.
- Care about documentation and clarity, and want to improve how people interact with data across the company.
Responsibilities
Data Modeling
- Maintain and expand the company’s analytical data model using Snowflake and dbt, ensuring datasets are reliable, well-structured, and easy to use.
- Partner with stakeholders to understand reporting and analytics needs and translate them into new models and datasets.
- Investigate discrepancies in metrics and datasets and perform root-cause analysis across systems.
Pipeline Monitoring and Maintenance
- Monitor and maintain ELT pipelines across our data stack.
- Investigate and resolve pipeline failures, schema changes, and data inconsistencies.
- Identify opportunities to improve pipeline reliability, efficiency, and cost effectiveness.
Data Documentation and Governance
- Expand documentation across the data model to clearly describe business logic, relationships, and definitions.
- Ensure datasets are clearly structured and documented so they can be reliably used across analytics tools and internal workflows.
- Contribute to the structured AI data context files that help internal AI tools accurately interpret datasets and metrics.
- Help maintain data governance standards, including contributing to PII masking policies and ensuring sensitive customer data is handled appropriately across the data platform.
Stakeholder Partnership
- Work closely with Product, Revenue, and Operations teams to understand their data needs and questions.
- Help stakeholders navigate the data model and identify the most appropriate datasets for their use cases.
- Occasionally build or modify Hex projects to support data exploration or reporting needs.
Requirements
- 2+ years of experience working in analytics engineering, data engineering, or analytics roles.
- Strong SQL skills and experience working with relational data warehouses.
- Hands-on experience working with Snowflake as a cloud data warehouse.
- Hands-on experience developing and maintaining models using dbt.
- Experience using Python for data workflows, scripting, or API integrations.
- Understanding of analytical data modeling concepts, including fact tables, dimensions, star/snowflake schema, and partitioning.
- Ability to independently investigate and resolve complex data issues across multiple systems.
- Strong communication skills and the ability to collaborate with both technical and non-technical stakeholders.
- Ability to work independently, investigate ambiguous problems, and propose improvements to the data platform.
- Deadline to apply: May 16th at 12:00 AM EDT