Recently, generative AI products have captured significant attention, including in the nonprofit and social impact sectors. These technologies offer the potential to streamline operations, reduce administrative tasks, and amplify impact for the communities served. However, their long-term effects remain uncertain. As technology providers supporting mission-driven organizations, we believe it is essential to approach AI responsibly—ensuring ethical design, robust implementation, and the well-being of end users and beneficiaries.
Navigating AI’s potential means grappling with a complex landscape of ethical, social, and regulatory considerations. Before discussing how nonprofits can guide and evaluate their AI initiatives, it’s important to acknowledge challenges such as bias, privacy concerns, and commercial or surveillance-oriented policies. Recognizing these factors provides critical context for building responsible AI solutions that truly serve the public good while respecting local norms and community values.
Technology providers often highlight AI’s benefits—streamlined processes, insightful data analysis, and potential cost savings. Meanwhile, global, regional, federal, and state bodies are developing frameworks to ensure that AI is used safely and ethically. Among the most influential are the OECD AI Principles, widely recognized for guiding responsible AI governance, and the European Union’s human-rights-based policy framework that emphasizes transparency, accountability, and fundamental rights. In some regions, however, national policies may prioritize commercial or surveillance goals over human rights, underscoring the need for continued advocacy and global dialogue on responsible AI.
This article is not an exhaustive guide or a comprehensive best-practices manual. Instead, it offers our reflections and key lessons from months of evaluating AI solutions specifically for nonprofit contexts. As we introduce AI features into our products, we remain focused on ensuring they truly benefit social impact organizations and the vulnerable communities they serve. Our aim is to equip nonprofit technology leaders with practical steps, questions, and resources for adopting AI responsibly.
This article invites you to pause and critically evaluate AI before adopting it. The questions below are the same prompts we use internally to guide our thinking on social impact, risk mitigation, and alignment with ethical best practices.
“When evaluating a use case for an AI-enabled tool, we need to consider the opportunity cost of implementing that solution. We can’t be driven by a desire to digitize for the sake of it; rather, we need to stay singularly focused on the people we’re trying to help, and where there is value in AI, the onus is on us technologists to prove it.”
– Bilal Mateen, Chief AI Officer, Plan International
1. Ethics & Community Values
Ethics is not a one-size-fits-all concept, especially in the social impact sector. Any ethical framework for AI must be contextual, reflecting the social norms, cultural perspectives, and ethical principles of the communities served by a given nonprofit. By engaging stakeholders (beneficiaries, staff, local leaders, and partners), organizations can ensure their AI solutions honor those values, foster trust, and remain relevant to the local context.
Ask yourself:
- Have you engaged the community you serve in shaping your AI policies and practices?
- Does your approach to AI uphold and respect local social norms and ethical frameworks?
- How might these norms vary across different regions or beneficiary groups?
2. Transparency & Explainability
Being transparent about how AI is used and making it understandable for stakeholders are two essential pillars of ethical and responsible AI. In a social impact context, this clarity is especially important, as trust forms the bedrock of relationships with donors, beneficiaries, and partners.
Ask yourself:
- Have you clearly communicated where and how AI is employed in your programs or tools?
- Can the decisions AI makes be explained in terms that staff, beneficiaries, or donors can understand?
At Vera, we’ve supported numerous foundations and grantmakers in their mission to direct resources effectively. When AI is used, for instance, to make recommendations on grant applications, explainability helps stakeholders understand why a specific proposal is rated highly or flagged for review. This clarity not only builds trust but also helps ensure that critical funding decisions are grounded in fairness.
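To make that concrete, here is a minimal sketch of how per-application explanations could be generated with the open-source SHAP library, one common explainability technique. The model, feature names, and data below are invented for illustration; this is not a description of how any particular product works.

```python
# Requires: pip install shap scikit-learn
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical grant-application features; names and values are illustrative only.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "budget_requested": rng.uniform(5_000, 100_000, 200),
    "years_operating": rng.integers(0, 30, 200),
    "prior_grants": rng.integers(0, 10, 200),
})
y = rng.integers(0, 2, 200)  # 1 = recommended for review, 0 = not

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each score to the features that drove it, giving reviewers
# a plain-language starting point, e.g. "rated highly mainly because of
# prior_grants and years_operating".
explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[:5])
print(explanation[0].values)  # per-feature contributions for the first application
```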
3. Fairness & Bias Mitigation
In the social impact sector, unintentional biases in AI can have real consequences for marginalized communities. Bias arises when the data used to train AI reflects historical or systemic inequities or when AI models inadvertently favor certain groups over others.
Ask yourself:
- Has the AI model been assessed for fairness, particularly regarding the communities you serve?
- Are there processes in place to reduce potential bias in your data or algorithms?
For example, a nonprofit might use AI to analyze beneficiary applications for housing assistance. If the underlying data comes from a region that has historically been underserved or underrepresented, the model could inadvertently overlook certain applicants. By regularly checking the AI’s outputs and addressing discrepancies, perhaps using tools like Error Analysis and Fairlearn, organizations can minimize these unintentional harms.
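As a sketch of what such a check might look like, Fairlearn’s MetricFrame can compare approval rates and accuracy across groups. The data and group labels below are synthetic stand-ins; which sensitive features to audit depends entirely on your context.

```python
# Requires: pip install fairlearn scikit-learn
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Synthetic housing-assistance decisions; data and groups are illustrative only.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)              # ground truth: applicant qualified
y_pred = rng.integers(0, 2, 500)              # model's approval decision
region = rng.choice(["urban", "rural"], 500)  # sensitive feature to audit

mf = MetricFrame(
    metrics={"selection_rate": selection_rate, "accuracy": accuracy_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=pd.Series(region, name="region"),
)
print(mf.by_group)      # each metric broken out per region
print(mf.difference())  # the largest gap between groups; big gaps warrant review
```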
4. Privacy & Data Protection
Handling sensitive data is a core concern for many nonprofits, as they often work with personal information from volunteers, donors, or beneficiaries. In AI-driven initiatives, maintaining privacy goes beyond basic compliance; it shows respect for the people whose data is being used.
Ask yourself:
- Are you collecting only the data you truly need to achieve your mission?
- Is sensitive data anonymized or encrypted where appropriate? (See the pseudonymization sketch at the end of this section.)
- Are you following relevant privacy regulations, such as the GDPR and HIPAA?
Keeping beneficiary data safe from misuse is paramount for both ethical and practical reasons. Trust is central to the relationship between nonprofit organizations and the communities they serve, and even minor privacy lapses can undermine that trust.
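One common pattern for the anonymization question above is keyed pseudonymization: replacing direct identifiers with an HMAC so records can still be linked across systems without storing raw PII. The sketch below is illustrative (the field names and key handling are hypothetical) and is not a substitute for legal review of your GDPR or HIPAA obligations.

```python
import hashlib
import hmac
import os

# Hypothetical beneficiary record; field names are illustrative only.
record = {"name": "A. Beneficiary", "email": "a@example.org", "need_score": 7}

# Keep the key in a secrets manager, never in code: losing it breaks record
# linkage, and leaking it lets pseudonyms be reversed by brute force.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible pseudonym for an identifier."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()

safe_record = {
    "person_id": pseudonymize(record["email"]),  # linkable, but not raw PII
    "need_score": record["need_score"],          # keep only what the mission needs
}
print(safe_record)
```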
5. Environmental Impact
Although nonprofits typically run leaner operations compared to commercial enterprises, AI models can still consume substantial energy, especially during training. With environmental sustainability high on many organizations’ agendas, it’s prudent to reflect on how AI adoption might affect your carbon footprint.
Ask yourself:
- Has your team considered energy usage when selecting or implementing AI solutions?
- Are there options to use smaller, more efficient models if they meet your needs?
- Is your infrastructure provider committed to reducing their own environmental footprint?
Progress on measuring AI’s environmental footprint is ongoing. Salesforce’s collaboration with Hugging Face, Cohere, and Carnegie Mellon University on an AI Energy Score provides a way to gauge which models may be more efficient, thereby helping mission-driven organizations stay mindful of sustainability. When it comes to environmental footprint, not all language models are created equal: tailored small language models, for instance, have a much smaller footprint than general-purpose large language models like GPT-4, which require significant GPU/TPU power to operate.
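Tooling for measuring this yourself is still maturing, but as one illustration, the open-source codecarbon library can estimate the CO2-equivalent emissions of a training or inference run, making rough side-by-side comparisons of candidate models possible. A minimal sketch, with a stand-in workload:

```python
# Requires: pip install codecarbon
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="model-comparison")  # writes emissions.csv
tracker.start()

# Stand-in for the workload you want to measure, e.g. a batch of inferences
# with a candidate small model versus a larger general-purpose one.
total = sum(i * i for i in range(10_000_000))

emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for the run
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```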
6. Robustness & Security
Social impact organizations often operate under resource constraints, making robust and secure technology even more critical. AI solutions should be resilient to errors and external threats, from data breaches to misuse of predictive models.
Ask yourself:
- Is there a clear plan to monitor how AI features perform under real-world conditions? (See the drift-check sketch at the end of this section.)
- Are you prepared to address system failures or misuse, especially in ways that could harm beneficiaries?
According to the OECD, ensuring traceability and risk management in AI systems can help nonprofits address potential issues more effectively. This also builds confidence in the technology for partners and funders.
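Returning to the monitoring question above, one lightweight approach is a scheduled statistical check that compares live model inputs against a reference sample captured at deployment and flags drift for human review. The data and threshold below are illustrative, and real monitoring would cover every important feature, not one.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic example: compare this week's values of one input feature against
# a reference sample captured at deployment time. The threshold is illustrative.
rng = np.random.default_rng(2)
reference = rng.normal(0.0, 1.0, 1_000)  # feature distribution at deployment
current = rng.normal(0.4, 1.0, 1_000)    # same feature, observed this week

stat, p_value = ks_2samp(reference, current)
if p_value < 0.01:
    # In production this would alert a person, not just print.
    print(f"Possible input drift (KS statistic={stat:.3f}); "
          "review model outputs before relying on them.")
```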
7. Accountability & Governance
Accountability ensures that AI solutions align with an organization’s mission and values throughout their lifecycle. It involves setting clear lines of responsibility and governance structures to maintain oversight and address challenges.
Ask yourself:
- Who is responsible for the ethical and effective use of AI within your organization?
- Is there a protocol or team dedicated to reviewing AI-generated decisions and handling potential mistakes?
ISO/IEC 42001, the international standard for AI management systems, offers nonprofits a risk-based framework for AI governance. Though more formal, it can be adapted to fit an organization’s size and scope, making sure accountability remains front and center.
8. Human Oversight & Control
AI tools should enhance, not replace, the human expertise and compassion that are hallmarks of the nonprofit sector. Whether it’s frontline social workers using predictive analytics to identify communities in greatest need, health workers using AI-powered decision support tools, or development staff analyzing or automating donor engagement, human judgment is indispensable.
Ask yourself:
- Is there a clear process for staff to review, override, or question AI-generated outcomes? (See the routing sketch at the end of this section.)
- Are staff and volunteers adequately trained to interpret AI recommendations?
By retaining human oversight, nonprofits can ensure that decisions remain mission-aligned and ethically grounded, even when assisted by advanced technologies.
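As a sketch of the review process asked about above, many teams route low-confidence AI outputs to a human queue by default. Everything here (the threshold, field names, and routing rule) is hypothetical; the right cut-off depends on the stakes of the decision.

```python
from dataclasses import dataclass

# Hypothetical confidence threshold; tune it to the stakes of the decision.
REVIEW_THRESHOLD = 0.85

@dataclass
class Decision:
    case_id: str
    ai_recommendation: str
    confidence: float
    needs_human_review: bool

def route(case_id: str, recommendation: str, confidence: float) -> Decision:
    # Below-threshold outputs are queued for staff review; even high-confidence
    # outputs should remain overridable downstream.
    return Decision(case_id, recommendation, confidence,
                    needs_human_review=confidence < REVIEW_THRESHOLD)

print(route("case-001", "approve", confidence=0.72))  # needs_human_review=True
print(route("case-002", "approve", confidence=0.95))  # needs_human_review=False
```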
9. Social & Business Value
At Vera, we strive to make any AI-related innovation truly worthwhile for mission-driven organizations. Our key takeaways for nonprofits seeking to harness AI responsibly include:
- Don’t add AI just for the sake of it. Pinpoint where AI can genuinely alleviate staff workloads, accelerate important processes, or boost service delivery.
- Data quality matters. AI relies on the integrity of incoming data, so make sure your data is accurate, representative, and well-organized (a minimal validation sketch follows this list).
- Start small and test. Pilot projects and proofs of concept (PoCs) can reveal potential pitfalls before a large-scale launch.
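As referenced in the list above, here is a minimal validation sketch: a handful of pandas checks (duplicates, missing values, out-of-range entries) run before any model ever sees the data. The column names and rules are invented for illustration.

```python
import pandas as pd

# Hypothetical intake data; column names and rules are illustrative only.
df = pd.DataFrame({
    "beneficiary_id": ["b1", "b2", "b2", "b4"],
    "age": [34, -1, 29, 41],               # -1 looks like an entry error
    "region": ["north", "south", "south", None],
})

issues = {
    "duplicate_ids": int(df["beneficiary_id"].duplicated().sum()),
    "missing_region": int(df["region"].isna().sum()),
    "age_out_of_range": int((~df["age"].between(0, 120)).sum()),
}
print(issues)  # surface problems before piloting, not after launch
```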
We’d love to collaborate with organizations dedicated to maximizing the social impact of AI. If you’re interested in responsible and sustainable AI solutions, feel free to reach out at ai@verasolutions.org.