Making Evaluation Smarter


In the ten years I’ve been working on evaluation with grantees of the U.S. Department of Education’s Gaining Early Awareness and Readiness for Undergraduate Programs, or GEAR UP, a lot of trends have come and gone. These trends aim at a good end – making sure the federal government is spending its money well – but they sometimes miss the essential question: how to improve the information the Department of Education asks for.

FHI 360 helps the Pennsylvania System of Higher Education and Fayetteville State University in North Carolina track a cohort of students starting in middle school. We look at students’ demographic information, academic and assessment performance, participation in GEAR UP activities, and other indicators that students have the skills and resources to achieve in postsecondary schools. At the end of every year, our grantees report to the Department of Education, saying how many students they’ve served and whether those students have reached the grant’s performance measures.

But all that report achieves is accountability.

Don’t get me wrong. Accountability is important. It can show whether a program has done what it said it was going to do. But as an evaluator, I’m aware that the quality of data we’re getting isn’t perfect. How do we separate GEAR UP’s effects from the effects of other college readiness programs that are targeting the same group of students? Even the best accountability methods can’t filter out the variety of influences that students are subject to.

More importantly, as an education professional and a citizen, I want to know more than whether a grantee counted as many beans as it said it would. I want to know if what the grantee is doing is really working. Secretary of Education Arne Duncan recently raised a similar point at the NCCEP/GEAR UP National Conference in Washington, DC. He asked, in effect, are we making change? And if so, is it good change?

Ironically, one of the best ways to find out the real impact of a program is to look at what happens to the initiative when the federal funding runs out. GEAR UP – like many federal initiatives – has a built-in end, when the cohort of students finishes high school or its first year of postsecondary education. But what if the Department of Education also retrospectively reviewed grantees that were considered successful during their run?

Evaluators might ask: Were those education systems able to find funding to continue offering services? Were some of GEAR UP’s successful practices built into the system’s model? Did students who came along too late for GEAR UP still get the skills and resources to achieve in postsecondary schools? In brief, what is left of the program, and are its positive achievements still standing?

These questions allow the Department of Education and the public to understand more than whether one college prep program worked for six or seven years. They help us achieve the bigger goal: building widespread career and college readiness structures that are sustainable over time.
