At Blackbaud, we are committed to building AI systems that are safe, trustworthy, and aligned with human values, especially in service of social impact. Our mission is to empower nonprofits, foundations, and educational institutions through responsible innovation. AI plays a critical role in this journey, and must reflect the needs, behaviors, and values of the communities we serve.
About the role
The Responsible AI Research Scientist is a strategic member of the Data & AI team. This role drives research at the intersection of machine learning, behavioral science, and AI alignment - advancing Blackbaud’s Responsible AI maturity by designing, implementing, and validating standards-based Responsible AI (RAI) frameworks, methods, and tools that ensure our AI systems are ethical, inclusive, and effective.
This role operationalizes the NIST AI RMF across the product lifecycle - embedding governance, risk mapping, measurement, and ongoing management into development workflows and feature delivery, so our AI capabilities are trustworthy and demonstrably aligned with stakeholder intent.
Partnering closely with Data Science, Engineering, Product, Legal, and Governance teams, this role serves as a thought leader and subject matter expert for Responsible AI across the enterprise. The Responsible AI Research Scientist will help shape how we innovate responsibly, safeguard organizational trust, and strengthen Blackbaud’s leadership in Responsible AI for social impact.
What you'll be doing
- Lead the design and execution of research to evaluate and improve the safety, trustworthiness, and alignment of AI systems, with a focus on enterprise-scale impact, regulatory compliance, and nonprofit and mission-driven contexts.
- Develop and validate scalable evaluation frameworks (aligned to standards such as NIST AI RMF) to measure how well AI systems reflect human intent, ethical standards, and real-world utility.
- Advance AI alignment strategies by modeling human preferences and behaviors using both qualitative and quantitative methods.
- Contribute to frontier research to anticipate future trends and risks in Responsible AI, integrating foresight and reflection to maximize benefits and avoid harm.
- Create and test new technical paradigms for Human-AI interaction, including feedback loops, oversight mechanisms, and interfaces that empower users and stakeholders.
- Collaborate with cross-functional teams (Data Science, Engineering, Product, Legal, Governance, etc.) to embed Responsible AI practices and frameworks into the development lifecycle.
- Publish research and thought leadership to further position Blackbaud’s Intelligence for Good as a trusted leader in Responsible AI for social impact, including contributions to the Blackbaud Institute and external forums.
- Serve as a key advisor to the CDAIO on responsible AI strategy, governance, and risk management.
What we'll want you to have
- 10+ years of experience working in or with centralized Data & AI teams, and a proven track record in enterprise AI governance, risk management, and regulatory compliance.
- Demonstrated expertise in machine learning, AI alignment, and/or Responsible AI research.
- Experience developing and validating evaluation frameworks for AI systems (preferably aligned to NIST, ISO, or similar standards).
- Strong background in behavioral science, computational linguistics, or related fields.
- Track record of publishing impactful research in AI safety, ethics, or alignment.
- Ability to translate complex research into actionable recommendations for technical and non-technical audiences.
- Motivated by Blackbaud’s mission to amplify social impact through technology.
- Demonstrated ability to influence organizational policy and technical strategy.
Stay up to date on everything Blackbaud: follow us on LinkedIn, Twitter, Instagram, Facebook, and YouTube.
Blackbaud powers social impact through purpose‑driven technology and responsible AI. Guided by our Intelligence for Good® vision, we’re building a culture where innovation, trust, and human expertise come together to help organizations make a greater difference in the world.
Blackbaud is proud to be an equal opportunity employer and is committed to maintaining a diverse and inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
The starting base pay is $176,800.00 to $224,900.00. Blackbaud may pay more or less based on employee qualifications, market value, Company finances, and other operational considerations.
Benefits include:
Medical, dental, and vision insurance
Remote-flexible workforce
Wellness Programs
401(k) program with employer match
Flexible paid time off
Generous Parental Leave
Donations for Doers
Pet insurance, legal and identity protection
Tuition reimbursement program