When Governance Meets Algorithms: The Next Chapter for Nonprofit Boards and Executive Directors
A Sneak Peek into the new Monthly Insight Brief for paid members
As AI moves from an experiment to an everyday tool inside nonprofit organizations, board members and Executive Directors will feel the ground shifting under them.
For decades, nonprofit boards have been guardians of mission and money, ensuring resources were well managed and missions stayed on track, while partnering with Executive Directors as leaders of strategy, teams, and mission advancement.
But with the rapid adoption of AI, the locus of responsibility is expanding.
Neither role description, at least historically, included overseeing algorithmic risk. As AI use widens across nonprofit organizations, governance is quietly expanding.
Boards and Executive Directors are becoming responsible not only for oversight of people and money, but also for:
Data governance
Algorithmic risk
Ethical use of generative tools
Transparency in how decisions are supported by technology
Accountability and traceability
Cybersecurity and data protection
Human capital and culture
Leaders might soon discover they were trained for yesterday’s version of leadership.
AI Is Not Just a Tech Upgrade
When nonprofit leaders hear “AI,” they often think efficiency. Faster drafts. Smarter segmentation. Better forecasting.
But AI is not just a productivity tool. It is also a systems accelerator. Consider what that looks like in practice as AI becomes more widely adopted in our sector.
Data Governance
Current AI tools rely on large datasets. Weak data hygiene becomes mission risk. If your data is incomplete, biased, unsecured, or poorly categorized, AI will scale those flaws.
Here, the ED must ensure operational clarity. What data is collected, why, how it is stored, who can access it, how long it is retained, and how it is protected.
The board ensures oversight. Are there policies, controls, and periodic reviews?
Algorithmic Risk
If an AI tool flags clients, ranks grant proposals, segments donors, or suggests program eligibility, the board must ask thoughtful questions:
What assumptions are embedded?
What unintended exclusions might occur?
The ED then translates these questions into internal review processes.
Ethical Use of Generative Tools
Generative tools are trained on large datasets and, in some cases, may retain or learn from inputs. That creates risk when confidential information is entered without safeguards, for example when staff use AI to draft communications, proposals, reports, or internal analysis. The guardrails become:
Where is human judgment required?
What must be reviewed before publication?
How do you protect confidential data, donor information, or lived experience narratives from being uploaded into public systems?
The ED establishes guardrails and training, such as usage policies and review processes.
The board ensures that ethical standards are explicit and written down.
Transparency in Decision Support
If a model flags which clients are prioritized for services, can staff explain the criteria? If an application is screened or scored by software, can the organization describe how that works?
The ED designs processes that maintain human review and explanation.
The board reinforces transparency as a trust strategy.
Accountability and Traceability
Ensure there is a clear chain of responsibility when AI tools inform or automate decisions, especially around service delivery or donor engagement.
If something goes wrong, the answer cannot be, “the system did it.” Someone owns the process. Someone reviews the output. Someone remains accountable.
Here, the ED ensures clear lines of responsibility.
The board ensures accountability structures.
Cybersecurity and Data Protection
As AI systems integrate with more tools and data sources, exposure increases. More integration often means more vulnerability. So we need to view AI expansion as part of enterprise risk management, not as just a shortcut to productivity.
The ED manages operational safeguards.
The board incorporates AI into enterprise risk oversight.
Human Capital and Culture
Surprisingly, or likely not if you’re already an AI user, oversight now includes how AI affects staff roles, skills, morale, and ethical norms.
Are staff being supported and trained?
Is AI reducing burnout, or just overloading teams with faster expectations?
Does automation align with the organization’s values around dignity and service?
The ED leads staff through that transition.
The board ensures that automation aligns with the organization’s mission and values.
The Strategic Opportunity for Partnership
There is a risk here. But there is also a rare leadership opportunity.
Boards that approach AI as a shared governance conversation rather than a compliance checklist will strengthen trust with their Executive Director.
Executive Directors who proactively surface AI use, risks, and guardrails will strengthen trust with their boards.
Instead of reacting to a crisis, boards and EDs can together design intentional oversight and build strong partnership muscles.
If what’s outlined above caused you a bit of panic, no need. Here are practical next steps you can take right now:
Conduct a joint AI inventory.
Where are tools currently used?
For what decisions?
With what data?
Define non-negotiables.
What data may never be entered into public systems?
What decisions always require human review?
Clarify ownership.
For each AI-supported process, who is responsible for oversight?
Integrate AI into your risk dashboard.
Add it to your regular risk reviews around data quality, cybersecurity, and how tools are actually being used. If it affects decisions or data, it belongs on the dashboard.
Integrate AI use as part of the annual ED evaluation conversation.
AI use is now part of leadership. Boards can ask simple questions: How is AI helping advance the mission? What guardrails are in place? How is it affecting staff and ethics? This shows that stewardship of technology is part of the ED’s job, not just an IT function.
Regularly review technology alignment with mission and values.
Do your technology choices reflect your values?
Are you reinforcing equity and dignity, or unintentionally weakening them?
You do not need a technologist on every board. You do, however, need a strong board and Executive Director partnership more than ever.
Because AI will magnify whatever systems you already have. This issue is already shaping how nonprofits operate. It’s here, not on the horizon.
If you like this essay, I’d love it if you’d share it.
PS
This essay is the first in what will be monthly Insight Briefs for Executive Directors and Boards of Directors inside the paid membership option. Priced really helpfully at
$20 per month or $197 per year. Not a typo :)
Some months will combine both perspectives, like this one. More often, you’ll receive separate editions. One tailored for Executive Directors, and one for nonprofit Boards of Directors.
Neither group has the time or bandwidth to scan every trend, insight, and policy change shaping our sector. That’s where The Pocket COO comes in: concise, practical foresight to help you stay informed and lead with confidence. Without the overwhelm.
Almost like having your own COO. I won’t leave you with just the data; I’ll help you translate it into action, offering clear next steps and tools you can put to work right away.

