The Paradox in Practice
A community behavioral health organization in central Washington has a $300,000 annual budget and four full-time employees. The executive director doubles as the clinical supervisor. The bookkeeper works half-time. There is no development staff, no grant writer, no compliance officer. The organization has been delivering crisis services and outpatient counseling for six years, funded by a patchwork of Medicaid reimbursement, a county contract, and one modest foundation grant.
A regional funder announces a $75,000 capacity building grant. The purpose: to help small organizations like this one develop financial policies, upgrade accounting systems, and build the administrative infrastructure needed to pursue larger grants. The executive director applies. It takes her two weeks — time carved from clinical supervision and community outreach — to assemble the application. She wins the award.
Then the grant agreement arrives.
Quarterly financial reports. Semi-annual narrative reports describing progress against a detailed work plan. An annual evaluation documenting capacity change using pre- and post-assessment tools. Documented procurement for any purchase over $3,000. Time and effort documentation for all staff charged to the grant. A final sustainability report explaining how the organization will maintain its new capacity after the funding ends.
Over the next twelve months, the executive director spends roughly 20 percent of her time on grant administration — meeting with the funder's program officer, compiling documentation, reviewing narrative drafts, preparing for a site visit. The bookkeeper spends a third of her already half-time hours on financial reporting for this single grant. A consultant is hired to conduct the organizational assessment, develop the financial policies, and install the new accounting software. The consultant's fee, combined with the staff time allocated to grant compliance and the cost of the accounting system itself, means the effective investment in actual capacity building — the policies, the system, the training — is perhaps $45,000 to $50,000 of the $75,000 award.
By the end of the grant period, the organization has better financial policies. It has a real accounting system. These are genuine improvements. But it also has a compliance obligation that will outlast the grant: record retention requirements, closeout documentation, and the possibility of a post-award audit. And the staff capacity consumed by administering the grant — that 20 percent of the ED's time, those hours the bookkeeper spent on reporting — was capacity that was not available for the mission. For twelve months, the organization ran harder to stay in the same place.
This is not an unusual story. It is, in its essential features, the most common capacity building story there is. And the question it raises is uncomfortable: did this grant make the organization stronger, or did it just make the organization busier?
What Capacity Building Is Supposed to Do
The premise of capacity building funding is sound. Organizations that lack administrative infrastructure struggle to compete for grants, manage awards, and demonstrate impact. Investing in that infrastructure should, in theory, make them more competitive, better managed, and ultimately more effective at serving their communities.
The standard logic model looks like this: invest in organizational infrastructure now; the organization becomes more competitive and better managed; it wins more and larger funding; it delivers more impact; and the initial investment pays for itself many times over.
Common capacity building activities include strategic planning, board development, financial system upgrades, human resources policy development, fundraising infrastructure, technology adoption, staff training, and evaluation design. These are real needs. Any executive director who has tried to compete for a federal grant without written procurement policies, a current indirect cost rate, or a functional accounting system will tell you that the infrastructure gap is not theoretical.
And the logic model works — when the investment is genuinely infrastructure-building and the administrative burden of the grant itself is proportional to the award. The problem is that these two conditions are frequently not met simultaneously. The compliance gap we described in our series opener is real, and capacity building grants are supposed to close it. Too often, they widen it instead.
Where It Breaks Down
The capacity building trap has five distinct failure modes. Most capacity building grants that underperform exhibit at least two of them.
Disproportionate Compliance Burden
A $50,000 capacity building grant administered under full federal compliance requirements — 2 CFR Part 200, or a foundation's equivalent — imposes essentially the same administrative burden as a $500,000 program grant. The same quarterly financial reports. The same narrative documentation. The same procurement rules, time and effort tracking, and closeout procedures. The compliance cost as a percentage of the award is ten times higher for the smaller grant. This is not an exaggeration; it is arithmetic.
Funders rarely scale their compliance requirements to the size of the award. A $500,000 program grant with a 15 percent administrative burden leaves $425,000 for program delivery. The ratio is manageable. A $50,000 capacity building grant with the same administrative structure might consume 25 to 35 percent of the award in compliance costs, leaving $32,500 to $37,500 for the actual capacity building. At that point, the grant is funding its own administration as much as it is funding the organization's development.
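The arithmetic above can be stated as a one-line calculation. The sketch below uses the figures from this section; the 25 to 35 percent compliance band is the text's illustrative range, not a measured value:

```python
def net_capacity_investment(award: float, compliance_rate: float) -> float:
    """Portion of an award left for actual capacity building after
    compliance costs (expressed as a fraction of the award) are removed."""
    return round(award * (1 - compliance_rate), 2)

# A $500,000 program grant with a 15% administrative burden:
print(net_capacity_investment(500_000, 0.15))  # 425000.0

# A $50,000 capacity building grant at 25% and 35% compliance cost:
print(net_capacity_investment(50_000, 0.25))   # 37500.0
print(net_capacity_investment(50_000, 0.35))   # 32500.0
```

The point of the exercise is the ratio: the same absolute compliance workload is a rounding error on a large program grant and a third of a small capacity building award.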
The Reporting Paradox
Funders want to know if capacity building worked. This is reasonable. They have a fiduciary obligation to their boards, their donors, or the public. But measuring capacity change requires sophisticated evaluation methods — organizational assessments at baseline and completion, maturity models, longitudinal tracking of indicators like revenue diversification, staff retention, and audit outcomes. These evaluation methods cost money and require expertise that small organizations do not have.
The result is a paradox: the grant requires the grantee to prove that capacity was built, but proving it consumes the capacity the grant was supposed to build. Organizations end up hiring evaluators, purchasing assessment tools, and spending staff time documenting change rather than creating it. The evaluation tail wags the capacity building dog.
The most perverse version of this is the grant that funds a consultant to conduct an organizational assessment, develop a capacity building plan, and then evaluate whether the plan was implemented — all within the same twelve-month grant period. The consultant writes a plan in months one through three, the organization attempts to implement it in months four through nine, and the consultant returns in months ten through twelve to evaluate progress. What gets measured is compliance with the plan, not whether the organization is actually stronger.
Short Timelines
Organizational transformation takes time. Installing a new accounting system is a six-month project. Learning to use it well is a two-year process. Developing and internalizing financial management policies — so they become institutional practice rather than documents in a binder — takes longer still. Board development is measured in years, not quarters. Building a culture of data-driven decision-making requires not just tools but habits, and habits form slowly.
Most capacity building grants run one to two years. Many run twelve months. This is long enough to purchase a system and write a manual. It is not long enough to change how an organization operates. The predictable outcome: at closeout, the grantee can document that deliverables were completed — the strategic plan was written, the software was installed, the training was conducted — but the organization's actual operating capacity has not shifted in a durable way.
Mismatched Assumptions
Capacity building grants frequently assume a baseline level of administrative capacity that the applicant organization does not possess. The application itself is the first test. A small organization with no grant writer, no development director, and an executive director who is also managing client services must find the time and expertise to complete a detailed application — often including a needs assessment, a logic model, a work plan with milestones, a budget with justification, and letters of commitment from partners.
This is a capacity filter masquerading as a grant opportunity. The organizations most in need of capacity building are the least equipped to navigate the application process. They do not know the vocabulary. They do not have the templates. They cannot afford to spend forty hours on an application with a 20 percent funding rate. (If your organization is in this position, our first-time applicant guide for Washington State walks through the foundational steps — but even that guide assumes a minimum threshold of staff time and organizational stability.) The organizations that win capacity building grants tend to be the ones that already have enough capacity to compete for them — which raises an uncomfortable question about who this funding is really reaching.
The Consultant Trap
A significant portion of capacity building funding flows to consultants. This is not inherently problematic — external expertise is often necessary for specialized tasks like financial system selection, board governance restructuring, or strategic planning facilitation. The problem is what happens after the consultant leaves.
The deliverable is typically a report: a strategic plan, an organizational assessment, a set of policy templates, a technology recommendation. The organization receives a polished document that it did not write, reflecting analysis it did not conduct, recommending changes it may not have the resources or institutional will to implement. The consultant moves on to the next engagement. The plan goes into a drawer. Six months later, nothing has changed — except the grant funds have been spent.
Effective consulting builds internal capacity. It involves staff deeply in the process, transfers knowledge explicitly, and includes implementation support. But implementation support is expensive and extends the engagement timeline. Most capacity building grants do not fund it. They fund the assessment and the plan. The follow-through is left as an exercise for the reader.
The Funder's Dilemma
It would be easy to frame this as a story about funders doing harm. That framing would be wrong. Funders — whether they are federal agencies, state programs, or private foundations — operate within real constraints, and the compliance structures that create the capacity building trap exist for defensible reasons.
Accountability requirements are real. Public funds require documentation. Foundation boards require evidence of impact. An inspector general, a legislative auditor, or a board finance committee that discovers lax oversight of grant funds will not accept “we trusted them” as an explanation. Reducing compliance is not free; it shifts risk from the grantee to the funder. For public agencies operating under 2 CFR Part 200, many reporting requirements are not optional — they are regulatory mandates.
Adverse selection is a genuine concern. Without application requirements, funders cannot distinguish between organizations that need and will use capacity building effectively and those that will treat the grant as general operating support by another name. Application requirements serve a screening function. Reduce them too far and you lose the ability to make informed funding decisions. This tension does not have a clean resolution.
Trust requires relationship. Trust-based philanthropy advocates for simplified reporting and unrestricted funding, and the evidence increasingly supports this approach for organizations the funder knows well. But for new relationships — a foundation making its first grant to an organization it has never worked with — some verification is reasonable. The question is proportionality, not whether verification should exist at all.
Portfolio management creates standardization pressure. A foundation making fifty capacity building grants needs a way to evaluate its portfolio. Did the investment work? Across which types of organizations? Under what conditions? Answering these questions requires standardized data, which requires standardized reporting, which imposes standardized burden — regardless of whether the grantee has a $200,000 budget or a $20 million budget.
These are not excuses. They are structural realities. And any serious proposal for improving capacity building funding must work within them, not wish them away. The funding cliff problem we examined in our analysis of grant cycles compounds all of this: even when capacity building grants work, the gains erode if the organization cannot sustain the new infrastructure beyond the award period.
What Effective Capacity Building Looks Like
The structural problems are real, but they are not inevitable. There are capacity building approaches that work — that genuinely strengthen organizations without consuming the capacity they aim to create. They share a set of common characteristics.
Proportional compliance. A $50,000 grant should not carry the same reporting requirements as a $500,000 grant. Tiered compliance — where reporting frequency, documentation requirements, and evaluation expectations scale with award size — is the single most impactful structural change funders can make. Some foundations have already adopted this: grants under $100,000 require an annual narrative and a final financial report; grants over $500,000 carry full quarterly reporting. The administrative burden matches the investment.
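One way to picture tiered compliance is as a lookup from award size to reporting regime. The under-$100,000 and over-$500,000 tiers below come from the examples above; the middle tier is a hypothetical interpolation, not a description of any specific funder's policy:

```python
def reporting_tier(award: float) -> dict:
    """Map award size to a reporting regime. Thresholds follow the
    examples in the text; the middle tier is an assumed midpoint."""
    if award < 100_000:
        return {"financial": "final report only", "narrative": "annual"}
    elif award <= 500_000:
        return {"financial": "semi-annual", "narrative": "semi-annual"}
    else:
        return {"financial": "quarterly", "narrative": "quarterly"}

print(reporting_tier(50_000)["narrative"])       # annual
print(reporting_tier(750_000)["financial"])      # quarterly
```

The design choice is the point: burden scales in steps with award size, so a small grant never inherits the full reporting apparatus built for a large one.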
Multi-year commitment. Three- to five-year awards with realistic milestones, not twelve-month sprints. Organizational change takes time. A funder that genuinely wants to build capacity must be willing to invest across a timeline that allows for learning, adjustment, and institutional embedding. The most effective capacity building funders treat the first year as a planning and relationship-building period and do not expect measurable capacity change until year two or three.
Flexible spending. General operating support, or a broadly scoped capacity building grant that lets the organization direct funds where they are most needed, is more effective than a prescriptive line-item budget. What an organization needs most may shift during the grant period. A budget that locks it into pre-determined activities for two years does not account for the reality that capacity gaps are interdependent and dynamic.
Peer learning. Cohort-based models, where grantees work alongside other organizations facing similar challenges, are consistently more effective than individual consultant engagements. Peer learning creates networks. Networks persist after the grant ends. A strategic plan written by a consultant gathers dust; a relationship with another executive director who solved the same problem lasts for years.
Embedded technical assistance. When the funder provides technical assistance that helps the grantee manage the grant itself — compliance support, financial reporting guidance, budget modification help — it reduces the administrative burden rather than adding to it. This inverts the typical model: instead of the grant creating compliance requirements that the grantee must meet alone, the funder absorbs a portion of the compliance work as part of its investment.
Honest evaluation. “Did this organization get stronger?” is a better evaluation question than “did the organization complete the activities listed in the work plan?” Outcome-oriented evaluation that focuses on organizational health indicators — revenue diversification, staff retention, audit outcomes, board engagement, community trust — respects the complexity of organizational development. It also costs less than elaborate pre-post assessment instruments, because much of the data already exists in the organization's financial and operational records.
The Ecosystem View
Capacity building is not just an individual-organization problem. It is a systems problem, and the systems dynamics are self-reinforcing in ways that widen inequality rather than narrow it.
The organizations that need capacity building the most — small, new, under-resourced, often led by people of color, often serving rural or marginalized communities — are the ones least equipped to find, apply for, win, and administer capacity building grants. The application process itself is a capacity filter. The reporting requirements are a capacity filter. The evaluation expectations are a capacity filter. At every stage, the system selects for organizations that already have what the funding claims to provide.
The result is a bifurcation. Well-resourced organizations — established nonprofits with development staff, grant writers, existing funder relationships, and clean audit histories — receive capacity building funding and use it to become even more competitive. Under-resourced organizations, the ones the funding was designed for, cannot clear the threshold to access it. The gap widens. The funding ecosystem becomes more concentrated, not less. This dynamic is especially pronounced in rural and frontier communities, where the infrastructure deficit extends beyond individual organizations to the entire institutional support ecosystem.
Addressing this requires structural alternatives that bypass the individual-grant model:
Intermediary models. A larger, administratively capable organization receives the capacity building grant and provides direct support to smaller organizations. The intermediary absorbs the compliance burden and passes through technical assistance, training, and operational support. This works well when the intermediary has genuine relationships with the smaller organizations and is accountable to them — not just to the funder.
Community foundation pass-through. Local community foundations with existing relationships and simplified grantmaking processes can distribute capacity building funds with lighter-touch compliance. The community foundation knows the organizations. It can make judgments about readiness and need that a national funder operating from a thousand miles away cannot. The tradeoff is scale — community foundations can support a handful of organizations, not hundreds.
Earned revenue investment. Some capacity building is better funded through earned revenue than through grants. Medicaid billing optimization, fee-for-service expansion, and social enterprise development can generate sustainable operating revenue that the organization controls. Grant funding that helps an organization build its revenue capacity — not just its grant management capacity — produces more durable results.
Public infrastructure. State-funded technical assistance programs provide capacity building without per-organization grant administration. In Washington, the Washington Council for Behavioral Health and other intermediaries provide training, technical assistance, and peer learning to behavioral health organizations statewide. The individual organization does not apply for a grant, manage a grant, or report on a grant. It receives support. This model is funded at the system level rather than the organization level, and it eliminates the compliance paradox entirely.
Questions Worth Asking
The capacity building trap is not a problem that can be solved with a single policy change or a new grant program. It is embedded in the incentive structures, accountability requirements, and institutional habits of the entire funding ecosystem. But it can be mitigated, and mitigation starts with honest questions.
For Funders
Is the administrative burden of your capacity building grants proportional to the award size? Have you measured it? If a grantee spends 25 to 35 percent of a $50,000 award on compliance, reporting, and evaluation, have you achieved your goal? What would happen if you cut your reporting requirements in half for awards under $100,000? Would the sky fall, or would the organizations you fund simply have more time and money to invest in the capacity you are trying to build?
How long does genuine capacity change take? If the answer is three to five years, why are your grants one to two years? What would a five-year capacity building investment look like, and how would your board respond to a portfolio of commitments that long?
When was the last time you asked your grantees what capacity building support would actually help — and then funded what they asked for, rather than what your program framework prescribed?
For Grantees
Before pursuing a capacity building grant, calculate the true cost of administration. Add up the staff time for compliance, reporting, evaluation, and funder communication. Subtract that from the award amount. Is the net investment — the money and time that will actually go to building capacity — worth the effort? Sometimes the answer is yes, and the grant is a genuine investment. Sometimes the answer is no, and the most strategic decision is to decline the opportunity and invest the staff time in other forms of organizational development.
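The decision rule in this paragraph is a simple net-value calculation. The sketch below uses hypothetical inputs (the hours, loaded hourly rate, and evaluation cost are invented for illustration, not drawn from the case study):

```python
def net_grant_value(award: float, admin_hours: float,
                    hourly_cost: float, other_admin_costs: float = 0.0) -> float:
    """Award minus the fully loaded cost of administering it.
    A small or negative result argues for declining the opportunity."""
    admin_cost = admin_hours * hourly_cost + other_admin_costs
    return award - admin_cost

# Hypothetical: a $75,000 award requiring 400 staff hours of compliance,
# reporting, and funder communication at a $45/hour loaded rate,
# plus $8,000 in required evaluation costs.
print(net_grant_value(75_000, 400, 45.0, 8_000))  # 49000.0
```

If the resulting net investment is less than what the same staff hours could produce through other forms of organizational development, the most strategic decision may be to decline.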
What capacity do you actually need, and is a grant the best way to get it? Peer networks, shared services arrangements, pro bono professional support, and earned revenue strategies may build capacity more effectively than a formal grant — without the compliance overhead.
If you do pursue capacity building funding, negotiate. Ask the funder about reporting flexibility. Propose a simplified evaluation approach. Request a multi-year timeline. Funders who are serious about capacity building will engage with these conversations. Funders who are not will tell you a lot about their priorities by how they respond.
For Both Sides of the Table
The most honest capacity building investment is unrestricted general operating support paired with a genuine conversation about how the organization plans to use it. Not a logic model. Not a work plan with quarterly milestones. A conversation. Trust-based philanthropy is not naive. It is efficient. It eliminates the compliance overhead that erodes capacity building grants from the inside. It respects the organization's knowledge of its own needs. And it produces results that are at least as strong as — and often stronger than — heavily structured capacity building programs.
This is not an argument against accountability. It is an argument against accountability structures that cost more than they are worth. When the overhead of verifying that a grant was well-spent exceeds the marginal value of that verification, the system is working against its own goals. Funders and grantees both know this. The question is whether they are willing to act on it.
A Structural Problem Requires Structural Thinking
The capacity building trap is, at its core, a story about good intentions meeting institutional inertia. Funders want to strengthen organizations. Organizations want to get stronger. The money flows. The reports get written. The deliverables get completed. And at the end of the grant period, many organizations are not meaningfully stronger — because the grant consumed the capacity it aimed to create.
This is not an indictment of capacity building as a concept. It is an observation that the delivery mechanism — the competitive grant, with its application requirements, compliance structures, reporting obligations, and short timelines — is often a poor fit for the goal. Building organizational capacity is slow, relational, context-dependent work. Competitive grants are fast, transactional, and standardized. The mismatch produces the trap.
Escaping it requires funders willing to experiment with proportional compliance, longer timelines, and trust-based approaches. It requires grantees willing to be honest about whether a particular grant will genuinely help or just keep them busy. And it requires an ecosystem willing to invest in public infrastructure — technical assistance programs, intermediary organizations, peer learning networks — that provides capacity building support without requiring every small organization to manage its own capacity building grant.
The organizations holding communities together deserve better than a funding model that creates the problems it claims to solve. They deserve investment that actually builds what it promises to build. Getting there requires both sides of the table to name the trap — and then, together, to design a way out.