The Grant Readiness Report · #9

You Can’t Report What You Can’t Measure

How to assess data readiness before you apply — not after the reporting template arrives.

16 min read · February 2026

The Award Letter Problem

The award letter arrives. The executive director sends an all-staff email. The board chair posts on LinkedIn. The program team starts hiring.

Then, three weeks later, the program officer sends the reporting template.

It requires unduplicated client counts by age, race, ethnicity, gender, and payer source. Service encounters by type, duration, and provider credential. Outcome measures with pre/post comparison at intake, six months, and discharge. Expenditure reports by budget category with match documentation and cost-per-service-unit calculations.

The program director opens the template and starts mapping it to what the organization actually captures. The EHR records diagnoses and visit dates. Billing tracks CPT codes and payer. The intake form collects name, date of birth, address, insurance, and emergency contact. The scheduling system shows appointment times and provider names.

None of these systems can produce a report that matches the template. The EHR does not capture the evidence-based practice used in each session. Billing does not distinguish between grant-funded and non-grant-funded encounters. The intake form does not ask about primary language, disability status, veteran status, housing status, or federal poverty level. And nothing in the entire data infrastructure links a client's intake PHQ-9 score to their six-month PHQ-9 score in a way that produces the pre/post outcome comparison the funder requires.

This is the data readiness gap. It is not a technology problem — every major EHR and financial platform can capture the elements funders need. The gap is a planning problem: the organization did not assess its data infrastructure against the grant's reporting requirements before it applied, and now it is discovering that its systems cannot produce what the funder will demand.

The cost of closing this gap after the grant starts is five to ten times what it would have cost before. Retroactive EHR configuration means re-entering data for clients already served. Custom report development under deadline pressure means premium consulting rates. Staff training mid-program means inconsistent data for the first reporting period — the one the program officer scrutinizes most closely.

Data readiness should be assessed before the application is written. It almost never is.

What Funders Actually Want to Know

Strip away the acronyms and reporting templates, and every funder is asking four questions. The data infrastructure required to answer them is not the same as the infrastructure for clinical care, billing, or general operations.

Who did you serve? Demographics, eligibility status, geographic distribution. This question serves two purposes: accountability (did you reach the target population?) and equity (what does your service population look like by race, ethnicity, language, gender, age, disability, and other dimensions?). The data this requires goes well beyond what most intake processes collect. Race and ethnicity in the categories the funder specifies — which may follow OMB standards, HRSA's categories, or a program-specific taxonomy — are less consistently captured than name and insurance. Primary language, disability status, veteran status, housing status, and income relative to the federal poverty level are frequently required and frequently missing. Every field you do not collect at intake is a field you cannot report at the end of the quarter.

What did you do? Service type, frequency, intensity, duration. This is the dosage question, and it matters because funders are testing a theory of change. If the evidence base says twelve sessions of CBT produce measurable improvement in depression, the funder wants to know whether your clients actually received twelve sessions or averaged four because of no-shows. Answering this requires encounter data at a granularity that scheduling and billing systems often do not capture. Your schedule shows a 60-minute appointment. The funder wants to know: individual or group? CBT, DBT, or motivational interviewing? Psychiatrist or counselor-in-training? If your systems do not capture these distinctions, your staff will reconstruct them from memory at the end of the reporting period — inaccurately.

What happened? Outcomes, changes, improvements. This is the question funders care about most and organizations struggle with most. It ranges from simple (screening completion rates, program retention) to complex (pre/post symptom reduction on validated instruments, housing stability over time). Outcome measurement requires structured assessment data collected at defined intervals — the PHQ-9 for depression, the GAD-7 for anxiety, the AUDIT for alcohol use — administered at intake, at regular intervals, and at discharge. For SAMHSA-funded programs, the GPRA/NOMS data collection is mandatory and specifies the exact instruments and intervals.
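To make the pre/post logic concrete, here is a minimal sketch in Python. The records and field names are illustrative, not drawn from any particular EHR; the point is that each client's intake score must be linked to a later score before any mean reduction can be reported.

```python
from statistics import mean

# Hypothetical assessment records: one row per administered PHQ-9.
# Field names are illustrative, not from any specific EHR.
assessments = [
    {"client_id": "C001", "point": "intake",    "phq9": 18},
    {"client_id": "C001", "point": "discharge", "phq9": 9},
    {"client_id": "C002", "point": "intake",    "phq9": 14},
    # C002 has no discharge score, so they drop out of the pre/post comparison.
]

def prepost_changes(records, instrument="phq9"):
    """Pair each client's intake score with their discharge score."""
    by_client = {}
    for r in records:
        by_client.setdefault(r["client_id"], {})[r["point"]] = r[instrument]
    return {
        cid: scores["intake"] - scores["discharge"]
        for cid, scores in by_client.items()
        if "intake" in scores and "discharge" in scores
    }

changes = prepost_changes(assessments)
print(f"Complete pairs: {len(changes)}; mean reduction: {mean(changes.values()):.1f}")
```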

What did it cost? Expenditures by budget category, cost per client, cost per service unit, match and leverage documentation. The SF-425 Federal Financial Report and program-specific financial reports require spending broken into the categories defined in the approved budget. Your financial system must track actual expenditures by grant and by budget line item, by reporting period. This is fund accounting — fundamentally different from the departmental or functional accounting that most general ledgers are designed to support.

The Five Common Data Gaps

The same five data gaps appear with remarkable consistency across grant-funded programs. They are predictable, identifiable before the grant starts, and fixable — if you know to look for them.

Gap 1: Demographic Completeness

Your intake process collects what you need for clinical care and billing: name, date of birth, address, insurance. The funder needs race, ethnicity, primary language, disability status, veteran status, housing status (HUD's categories, not a free-text field), insurance type (matching the funder's payer categories), and household income relative to the federal poverty guidelines.

The gap is not that your intake form cannot accommodate these fields. It is that nobody added them, nobody trained staff to ask the questions appropriately, and nobody built the workflow that ensures the data reaches a reportable database field rather than sitting on a paper form in a file cabinet.

For HRSA-funded health centers, the UDS requires demographics in specific categories that must reconcile across multiple tables. A patient counted in Table 3A (patients by age and sex) must also appear correctly in Table 3B (race, ethnicity, language) and Table 4 (income and insurance). If your EHR captures race in categories that do not map to HRSA's taxonomy, you have a data translation problem that no amount of last-minute report building will solve.
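The reconciliation logic itself is simple to sketch, even though the real UDS specifications run far deeper. A minimal Python illustration, with invented patient IDs standing in for table contents:

```python
# Minimal sketch of the cross-table reconciliation idea: every patient
# counted in one demographic table must appear in the others. Table names
# echo UDS, but the records here are illustrative only.
table_3a = {"P01", "P02", "P03"}   # patients by age and sex
table_3b = {"P01", "P02", "P03"}   # patients by race, ethnicity, language
table_4  = {"P01", "P02"}          # patients by income and insurance

def reconcile(name_a, ids_a, name_b, ids_b):
    """Flag patients present in one table but missing from another."""
    missing = ids_a - ids_b
    if missing:
        print(f"{name_a} has {len(missing)} patient(s) missing "
              f"from {name_b}: {sorted(missing)}")

reconcile("Table 3A", table_3a, "Table 3B", table_3b)  # clean
reconcile("Table 3A", table_3a, "Table 4", table_4)    # flags P03
```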

Gap 2: Service Encounter Granularity

Your scheduling system records that Client A saw Provider B on Tuesday at 2:00 PM for 60 minutes. Your billing system records that the visit generated CPT code 90837 (psychotherapy, 60 minutes) billed to Medicaid.

The funder wants to know: Was this individual therapy or family therapy? Was the evidence-based practice CBT, and if so, was it trauma-focused CBT or standard CBT? Was the provider operating within the scope of the SAMHSA grant, or was this a Medicaid-only encounter? Did the session include a care coordination component? Was a standardized assessment administered during the session?

These are not exotic data elements. They are standard grant reporting requirements. But they require fields that billing and scheduling systems do not natively capture, because billing and scheduling were designed to answer different questions — “what can we bill for?” and “when is the next opening?” — not “what did we do, in the terms the funder uses to define services?”
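What the funder needs is an encounter record that carries the grant-reporting fields alongside the billing fields. A minimal sketch of such a record in Python; the field names and category values are illustrative, not any funder's specification:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Encounter:
    # What scheduling and billing already know:
    client_id: str
    provider_id: str
    service_date: date
    duration_min: int
    cpt_code: str
    # What grant reporting needs and most systems never capture:
    modality: str                           # "individual" | "group" | "family"
    ebp: Optional[str]                      # e.g., "TF-CBT", "DBT", "MI"
    provider_credential: str                # e.g., "LICSW", "psychiatrist"
    grant_funded: bool                      # in grant scope, or Medicaid-only?
    assessment_administered: Optional[str]  # e.g., "PHQ-9", or None

visit = Encounter("C001", "PR07", date(2026, 2, 3), 60, "90837",
                  modality="individual", ebp="TF-CBT",
                  provider_credential="LICSW", grant_funded=True,
                  assessment_administered="PHQ-9")
```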

Gap 3: Unduplicated Client Counts

“How many unique individuals did your program serve this quarter?” Simple question. Difficult execution. What if the same person receives services at two sites? Enrolled, dropped out, and re-enrolled? Has their name spelled differently in two systems?

Unduplicated counts require a reliable, persistent client identifier that works across programs, sites, time periods, and data systems. If your behavioral health program and primary care program use different client ID systems, producing an unduplicated count across both requires a master client index — or a manual deduplication process that is labor-intensive and error-prone.
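The core of the problem fits in a few lines. A minimal Python sketch of a normalized matching key, with invented records; production master client indexes use far more robust probabilistic matching:

```python
import unicodedata

def match_key(name: str, dob_iso: str) -> tuple:
    """Normalize accents, case, and spacing so near-identical entries collide."""
    folded = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return (folded.lower().replace(" ", ""), dob_iso)

records = [
    {"system": "BH", "name": "José García", "dob": "1990-04-12"},
    {"system": "PC", "name": "Jose Garcia", "dob": "1990-04-12"},
    {"system": "BH", "name": "Dana Lee",    "dob": "1985-01-30"},
]

unduplicated = {match_key(r["name"], r["dob"]) for r in records}
print(len(records), "records,", len(unduplicated), "unique clients")  # 3 records, 2 unique
```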

For organizations participating in Washington's HMIS, the problem is compounded by cross-agency data sharing. HUD's Continuum of Care and Emergency Solutions Grant programs require system-level unduplicated counts across all providers in the continuum. Your data must be clean enough to deduplicate against other organizations' entries.

Gap 4: Outcome Measurement

A SAMHSA grant requires that you administer the PHQ-9 at intake, every 90 days during treatment, and at discharge. Your clinical staff administer it at intake — they are trained to do so, and the intake workflow includes it. At 90 days, some clinicians administer it reliably. Others forget, or the client cancels the 90-day appointment and the assessment does not happen at the rescheduled visit. At discharge — which in behavioral health is often unplanned, the client simply stops coming — no discharge assessment occurs because there is no discharge encounter.

The result: you have intake PHQ-9 scores for 95% of clients, 90-day scores for 60%, and discharge scores for 30%. Your outcome story — “clients showed a mean reduction of X points on the PHQ-9” — is based on the 30% of clients for whom you have complete pre/post data. The funder will ask about the other 70%. If your answer is “we don't have the data,” you have not demonstrated that your program did not work. You have demonstrated that you cannot measure whether it worked. For a funder, that is nearly as damaging.

SAMHSA's GPRA/NOMS requirements specify a 80% follow-up rate target. Programs that cannot achieve this rate face consequences ranging from increased technical assistance to reduced funding. The data infrastructure to support follow-up — automated reminders, outreach workflows for clients who miss assessment windows, administrative systems for tracking which assessments are due for which clients — must be built before the first client enrolls.

Gap 5: Financial Tracking at the Program Level

Your general ledger tracks expenses by account code: salaries, benefits, rent, supplies, travel, contractual. Your funder wants expenses by grant and by budget line item within the grant: personnel (broken into salary and fringe by position), contractual services (by contract), supplies (by type), travel (by trip), other direct costs, and indirect costs at your approved rate.

This is fund accounting — tracking every dollar to its funding source and budget category. Without it, producing a federal financial report (SF-425) requires manual reconstruction: pulling salary records, allocating shared costs, mapping account codes to budget categories, reconciling to your drawdowns from the Payment Management System. This reconstruction is time-consuming, error-prone, and produces results that auditors can easily challenge.
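When every transaction carries both a funding source and a budget line, the report totals fall out of a single group-by. A minimal Python sketch with invented grant codes and amounts:

```python
from collections import defaultdict

# Illustrative transactions tagged by grant and budget line at entry time.
transactions = [
    {"grant": "SAMHSA-TCE", "line": "personnel",   "amount": 12500.00},
    {"grant": "SAMHSA-TCE", "line": "contractual", "amount": 3000.00},
    {"grant": "HRSA-330",   "line": "personnel",   "amount": 9800.00},
]

totals = defaultdict(float)
for t in transactions:
    totals[(t["grant"], t["line"])] += t["amount"]

# Each (grant, line) total maps to one cell of the financial report.
for (grant, line), amt in sorted(totals.items()):
    print(f"{grant:12} {line:12} ${amt:>10,.2f}")
```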

For organizations holding multiple grants, fund accounting is not optional. It is the infrastructure that makes accurate reporting possible across all funding streams simultaneously.

The EHR Problem

Electronic health records are designed for clinical care and billing. They are not designed for grant reporting. This is not a flaw — it reflects the EHR market, where revenue cycle management drives purchasing decisions.

Configuration matters more than product. Epic, Oracle Health (formerly Cerner), eClinicalWorks, NextGen, athenahealth — any major platform can capture the data elements funders require, if configured correctly. Out-of-the-box configurations rarely align with grant reporting needs. Grant-specific data elements (program enrollment flags, EBP designations, grant-funded visit indicators, custom demographic fields) require custom fields, modified intake workflows, and new reports. This configuration requires someone who understands both the clinical data model and the grant reporting requirements — a skill set that is genuinely rare.

Custom fields added after the grant starts create permanent data quality problems. Add a “housing status” field three months into a 12-month grant, and you have three months of clients with no housing status data. Retroactive data entry is unreliable, and for your first quarterly report, the gap will be visible.

Report building is the final bottleneck. Extracting data in the funder's required format — unduplicated counts in the correct demographic categories, outcome measures with the correct pre/post logic — requires custom reports that must be built, tested, validated, and maintained through EHR updates. Building a single complex grant report can take 40 to 80 hours of analyst time.

HRSA UDS reporting is in a category of its own. The Uniform Data System requires health centers to report across more than a dozen tables covering demographics, diagnoses, services, staffing, quality measures, and finances — all internally reconciled. This complexity is why most FQHCs — as the FQHC compliance article in this series details — use specialized UDS reporting modules rather than attempting to extract directly from the clinical database.

Building Data Readiness Into Grant Planning

Data readiness is not a separate workstream from grant planning. It is a core component of the go/no-go decision. An organization that cannot produce the required reports should either budget to build the infrastructure or decline to apply.

Six steps, each actionable before the application is submitted.

Step 1: Pre-application data assessment. Before writing a single narrative paragraph, obtain the reporting requirements. Every NOFO includes them. Some funders publish actual reporting templates. SAMHSA publishes its GPRA/NOMS data collection instruments. HRSA publishes UDS table specifications. Map every required data element to your current systems. For each element, answer three questions: Can we capture this today? Where does it live? Can we extract it in the format the funder requires? Any “no” is a gap that must be addressed in the budget and implementation plan.
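The mapping can be as simple as a structured checklist. A minimal Python sketch, with illustrative element and system names:

```python
# One entry per required data element: can we capture it, where does it
# live, can we extract it in the funder's format? Names are illustrative.
element_map = {
    "race_ethnicity": {"captured": True,  "system": "EHR intake",    "extractable": True},
    "housing_status": {"captured": False, "system": None,            "extractable": False},
    "phq9_followup":  {"captured": True,  "system": "EHR flowsheet", "extractable": False},
}

gaps = [e for e, v in element_map.items()
        if not (v["captured"] and v["extractable"])]
print("Gaps to address in the budget:", gaps)
# -> ['housing_status', 'phq9_followup']
```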

Step 2: Budget for data infrastructure. EHR configuration, custom report development, data analyst time, assessment tool licensing, staff training — these are all legitimate, allowable grant costs. A budget that includes a line item for “EHR configuration and report development” signals an organization that has thought through implementation. A budget with no data infrastructure costs signals one that has not.

Step 3: Configure before you serve. The first client enrolled in the grant-funded program should generate complete, reportable data. All EHR configuration, intake form modifications, assessment tool setup, and staff training must be completed during the startup period — before clinical services begin. If the grant includes a ramp-up period, this is the highest-priority use of that time.

Step 4: Build the report template on day one. Do not wait for the reporting deadline. Create a blank version of every required report at the start of the grant period. Populate it with test data to verify that extraction, aggregation, and formatting logic works. If it does not work with test data, it will not work under deadline pressure.
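Even a crude check catches the most common failure: a source value that does not map to any funder category. A minimal Python sketch with an invented category list:

```python
# Illustrative funder payer categories; not an actual funder specification.
ALLOWED_PAYERS = {"medicaid", "medicare", "private", "uninsured", "other"}

test_rows = [
    {"client_id": "T01", "payer": "medicaid"},
    {"client_id": "T02", "payer": "commercial"},  # not a funder category
]

bad = [r for r in test_rows if r["payer"] not in ALLOWED_PAYERS]
if bad:
    print("Unmapped payer values, fix before go-live:", bad)
```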

Step 5: Assign a data steward. One person — not a committee, not “whoever has time” — must own data quality for the grant. This person reviews completeness weekly, catches errors before they compound, and bridges clinical staff who generate the data and reporting staff who submit it. In a small organization, this may be the program director. In a larger one, a quality improvement coordinator or dedicated grants data specialist.

Step 6: Plan for the real world. Clients miss appointments. Clinicians forget assessments. Staff check the wrong box. The question is not whether data quality problems will occur but whether you catch them in week two or month eleven.

Build data quality checks into your operational rhythm. Weekly: review new client records for demographic completeness. Monthly: run the outcome assessment compliance report and follow up on missed assessments. Quarterly: produce a draft of the funder report and review it for anomalies. The organizations that produce clean reports on time are not the ones with perfect data — they are the ones that catch and correct problems continuously.
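The weekly completeness review, as a minimal Python sketch; the required fields and client records are illustrative:

```python
# Percentage of new client records with each required field populated.
REQUIRED = ["race", "ethnicity", "primary_language", "housing_status"]

new_clients = [
    {"race": "Asian", "ethnicity": "Non-Hispanic",
     "primary_language": "Khmer", "housing_status": None},
    {"race": "White", "ethnicity": None,
     "primary_language": "English", "housing_status": "Stably housed"},
]

for field in REQUIRED:
    filled = sum(1 for c in new_clients if c.get(field))
    print(f"{field:18} {filled / len(new_clients):6.0%} complete")
```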

The Washington State Data Landscape

Washington's health and human service providers operate in a data environment that amplifies every challenge described above.

HCA behavioral health reporting. The Health Care Authority's Division of Behavioral Health and Recovery (DBHR) requires encounter data, outcomes data, and financial data from contracted providers — using state-specific formats, timelines, and data definitions that may not align with federal reporting for the same services. A provider delivering SAMHSA-funded substance use treatment while also contracted with HCA for Medicaid behavioral health is reporting on the same clinical work to two entities, in two formats, using two outcome frameworks.

HMIS for homeless services. Organizations receiving HUD Continuum of Care (CoC) or Emergency Solutions Grant (ESG) funding must enter client-level data into the Homeless Management Information System. Washington's HMIS has specific data quality standards that providers must meet. HMIS data also feeds the Point-in-Time count, the Housing Inventory Count, and the Annual Performance Report — each with its own specifications.

DOH registries and surveillance systems. The Department of Health maintains the immunization registry (IIS), vital records, communicable disease surveillance, and other public health data systems — each with its own format, submission method, and compliance requirements.

The data burden multiplication. A Washington community health center simultaneously reports to: HRSA (UDS, annually); SAMHSA (GPRA/NOMS, at intake/discharge/follow-up); HCA (behavioral health encounters, monthly); CMS (quality measures, quarterly and annually); DOH (immunization, disease reports, vital records); and private foundations (custom templates, semi-annually).

Six reporting destinations. Six data formats. Six timelines. Six definitions of what constitutes a “service encounter” or an “unduplicated client.” The underlying clinical reality is the same — the same providers seeing the same patients — but the reporting infrastructure must transform that reality into six different data products. As the state-federal overlap article in this series describes, this is the normal operating environment.

An organization that builds its data infrastructure to satisfy only one funder will manually reconstruct data for every other funder. An organization that captures the superset of all required elements can produce all required reports from a single, well-structured source. The upfront investment is significant. The alternative — perpetual manual reconstruction under deadline pressure — is more expensive and less reliable.
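The superset approach, sketched in Python: one internal record, rendered into two report shapes. Neither output dict matches a real state or federal file layout; they only illustrate the transformation pattern:

```python
# One well-structured internal record captures the superset of fields.
encounter = {"client_id": "C001", "date": "2026-02-03", "minutes": 60,
             "modality": "individual", "ebp": "TF-CBT", "payer": "medicaid"}

def to_state_format(e):
    """Hypothetical state layout: coded columns, 15-minute service units."""
    return {"CLIENT": e["client_id"], "SVC_DT": e["date"],
            "UNITS": e["minutes"] // 15}

def to_federal_format(e):
    """Hypothetical federal layout: service type and EBP, no unit math."""
    return {"client_id": e["client_id"], "service_type": e["modality"],
            "evidence_based_practice": e["ebp"]}

print(to_state_format(encounter))
print(to_federal_format(encounter))
```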

Data Readiness Is Grant Readiness

An organization that cannot produce clean, complete, timely reports will struggle with every grant it holds. Program quality will not save it. If the funder asks for unduplicated client counts by race and ethnicity and your system cannot produce them, you have a compliance problem — regardless of how many lives your program changed.

This is not an argument for valuing data over mission. It is a recognition that in the grant-funded world, data is the mechanism through which mission becomes visible. You cannot demonstrate impact without measuring it. You cannot measure it without capturing the right data. You cannot capture the right data without systems configured to do so.

Data readiness is structural readiness applied to information systems. It follows the same logic: assess before you apply, invest before you need it, maintain continuously rather than scrambling at deadlines. Use the WA Readiness Checklist to evaluate where your data infrastructure stands today.

The practical steps are concrete. Map your reporting requirements before you write the application. Budget for EHR configuration, report development, and analyst time. Configure your systems before the first client walks in. Build your report templates on day one. Assign someone to own data quality. Check your data weekly, not annually.

None of this is glamorous. None of it appears in the program narrative that wins the grant. But it is the infrastructure that determines whether the grant you win becomes a success story you can document — or a compliance headache you cannot escape.

Talk to us about grant compliance

Weave tracks deadlines, pre-fills reports, and monitors compliance across your grant portfolio. See how it works.